Abstract

This paper studies federated learning (FL) with non-identical data-generating distributions under the notion of proportional fairness. Users are partitioned into disjoint clusters based on their data distributions, and a distinct model is learnt for each cluster. Unlike previous work that used clustering to optimize personalized learning performance, here clustering is examined under the disparate impact doctrine, which requires that protected classes (e.g., those of a certain race or gender) have a representation in every cluster approximately equal to their representation in the overall set of users. A family of iterative algorithms is proposed that balances learning performance and proportional fairness by making cluster assignments randomized functions of the learning losses. The trade-off these algorithms induce between the accuracy of cluster estimation and the introduced randomization level is characterized. The proposed algorithm is evaluated on a real dataset.
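The randomized loss-based assignment described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact rule: the softmax form and the temperature parameter controlling the randomization level are assumptions made here for concreteness, as is the representation-gap measure of proportional fairness.

```python
import math

def soft_assign(losses, temperature):
    """Randomized cluster assignment: the probability of cluster k is
    softmax(-loss_k / temperature).  As temperature -> 0 this recovers
    the greedy lowest-loss assignment; a larger temperature injects the
    randomization that trades cluster-estimation accuracy for fairness.
    (Illustrative choice of randomization; not the paper's exact rule.)"""
    logits = [-l / temperature for l in losses]
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def representation_gap(assignments, protected, num_clusters):
    """Maximum deviation, over clusters, of the protected group's share
    in that cluster from its share in the overall user population --
    one hypothetical way to quantify the disparate-impact criterion."""
    overall = sum(protected) / len(protected)
    gap = 0.0
    for k in range(num_clusters):
        members = [p for a, p in zip(assignments, protected) if a == k]
        if members:
            gap = max(gap, abs(sum(members) / len(members) - overall))
    return gap
```

For example, a user whose per-cluster losses are `[0.2, 1.0]` is assigned to the first cluster almost surely at low temperature, while a higher temperature spreads the assignment probability across clusters, making it easier to keep each protected group's per-cluster representation close to its overall share.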

Department(s)

Electrical and Computer Engineering

International Standard Book Number (ISBN)

978-166542159-1

International Standard Serial Number (ISSN)

2157-8095

Document Type

Article - Conference proceedings

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2024 Institute of Electrical and Electronics Engineers, All rights reserved.

Publication Date

01 Jan 2022
