Abstract

Federated Learning (FL) is a paradigm that enables collaborative machine learning without disclosing the participants' local data. However, in real-world FL deployments, some unscrupulous clients may alter the training process to skew the global model towards their local optimum, unfairly prioritizing their own data distribution. Their influence can degrade overall model performance for normal clients and reduce fairness in the system. We call this novel category of misbehaving clients 'selfish'. This work proposes a Fair and Robust strategy for aggregation in the FL server to mitigate the effect of selfish clients (FairRFL). FairRFL incorporates a novel technique to recover (or estimate) the true updates from selfish clients by using robust statistics, specifically the median of norms. By including the recovered updates in the aggregation process, the presented strategy is robust against selfish behavior. Through extensive empirical evaluations with the WISDM-W and CIFAR-10 datasets, we observe that a selfish client can increase the model accuracy on its data by up to 39% and more than quadruple the accuracy variance among clients, effects that FairRFL fully mitigates, recovering performance fairness across normal clients.
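The abstract mentions using robust statistics, specifically the median of norms, to bound the influence of selfish clients. As an illustrative sketch only (not the paper's exact FairRFL algorithm), a common median-of-norms heuristic rescales any client update whose L2 norm exceeds the median norm before averaging; the function names below are hypothetical:

```python
# Illustrative median-of-norms clipping sketch, NOT the paper's exact
# FairRFL recovery procedure. Each update is a flat list of floats.
from statistics import median
import math


def clip_to_median_norm(updates):
    """Rescale each update so its L2 norm is at most the median norm."""
    norms = [math.sqrt(sum(x * x for x in u)) for u in updates]
    med = median(norms)
    clipped = []
    for u, n in zip(updates, norms):
        if n > med and n > 0:
            # Shrink oversized updates back to the median norm.
            scale = med / n
            clipped.append([x * scale for x in u])
        else:
            clipped.append(list(u))
    return clipped


def aggregate(updates):
    """FedAvg-style mean of the clipped updates."""
    clipped = clip_to_median_norm(updates)
    dim = len(clipped[0])
    return [sum(u[i] for u in clipped) / len(clipped) for i in range(dim)]
```

A client submitting an update ten times larger than its peers is scaled down to the median norm, so its pull on the aggregated model is bounded regardless of how aggressively it inflates its contribution.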

Department(s)

Computer Science

Comments

European Commission, Grant PE00000013

Keywords and Phrases

Distributed systems; fairness; federated learning (FL); reliability and robustness

International Standard Serial Number (ISSN)

2168-6750

Document Type

Article - Journal

File Type

text

Language(s)

English

Rights

© 2026 Institute of Electrical and Electronics Engineers, All rights reserved.

Publication Date

01 Jan 2026
