Abstract
This paper studies federated learning (FL) with non-identical data-generating distributions under the notion of proportional fairness. Users are partitioned into disjoint clusters based on their data distributions, and a distinct model is learned for each cluster. Unlike previous work, which used clustering to optimize personalized learning performance, here clustering is examined under the disparate impact doctrine, which requires that protected classes (e.g., those of a certain race or gender) have representations in every cluster that are approximately equal to their representation in the overall set of users. A family of iterative algorithms is proposed that balances learning performance and proportional fairness through cluster assignments that are randomized functions of the learning losses. The trade-off induced by these algorithms between the accuracy of cluster estimation and the introduced randomization level is characterized. The proposed algorithm is evaluated on a real dataset.
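As a rough illustration of the idea in the abstract, the sketch below samples a user's cluster with probability decreasing in that user's loss under each cluster's model, via a temperature-controlled softmax. The softmax form, the `temperature` parameter, and the function name are assumptions for illustration only, not the paper's exact assignment rule; the temperature plays the role of the randomization level traded off against accuracy of cluster estimation.

```python
import math
import random

def randomized_cluster_assignment(losses, temperature=1.0, rng=None):
    """Sample a cluster index for one user from its per-cluster losses.

    losses[k] is the user's empirical loss under cluster k's model.
    A softmax over negative losses is one plausible randomized rule
    (an illustrative assumption, not the paper's algorithm): a low
    temperature concentrates mass on the best-fitting cluster (accurate
    estimation), while a high temperature spreads mass across clusters
    (more randomization, as used to promote proportional fairness).
    Returns the sampled cluster index and the assignment probabilities.
    """
    rng = rng or random.Random()
    weights = [math.exp(-l / temperature) for l in losses]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Inverse-CDF sampling from the categorical distribution.
    r = rng.random()
    cumulative = 0.0
    for k, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return k, probs
    return len(probs) - 1, probs
```

With a very low temperature the rule behaves like a hard argmin over losses; as the temperature grows, assignments approach uniform, illustrating the estimation-accuracy versus randomization trade-off the abstract characterizes.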
Recommended Citation
M. Nafea et al., "Proportional Fair Clustered Federated Learning," IEEE International Symposium on Information Theory - Proceedings, pp. 2022 - 2027, Institute of Electrical and Electronics Engineers, Jan 2022.
The definitive version is available at https://doi.org/10.1109/ISIT50566.2022.9834545
Department(s)
Electrical and Computer Engineering
International Standard Book Number (ISBN)
978-166542159-1
International Standard Serial Number (ISSN)
2157-8095
Document Type
Article - Conference proceedings
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2024 Institute of Electrical and Electronics Engineers, All rights reserved.
Publication Date
01 Jan 2022