Abstract

Federated Learning (FL) is a distributed learning paradigm that leverages the computational strength of local devices to collaboratively train a model. Clients train local models on their own devices and submit only the weight updates to the server for aggregation. This paradigm allows clients to benefit from diverse data without sharing their local data with other participants or the server. However, FL is susceptible to backdoor attackers who deliberately train on altered data so that the model performs an attacker-chosen subtask while leaving the main task largely unaffected. In this work, we focus on powerful backdoor attackers who split their attack across multiple colluding clients to strengthen its impact and to evade strong detection methods. We propose a novel defense algorithm against distributed backdoor attacks that combines dynamic model clipping with a reputation-based global model update that filters out adversarial update vectors. Because it requires only minimal changes to the standard FL framework, our algorithm can be used as a plug-in solution. Simulating various forms of backdoor attacks on three benchmark datasets, we find that our algorithm maintains a lower attack success rate than prior solutions while incurring a negligible compromise in the overall performance of the model.
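
The sketch below illustrates one plausible shape of the server-side defense described above; it is a minimal illustration, not the paper's exact method. The clipping rule (median update norm), the reputation threshold, and the cosine-similarity reputation update with exponential decay are all assumptions introduced here for concreteness.

    import numpy as np

    def aggregate_round(global_model, client_updates, reputations,
                        rep_threshold=0.5, decay=0.9):
        """One aggregation round: clip updates, filter by reputation,
        average, then refresh reputations. Hyperparameter values are
        illustrative assumptions, not values from the paper."""
        # Dynamic clipping bound: the median L2 norm of this round's
        # updates (one plausible choice of dynamic bound).
        norms = [np.linalg.norm(u) for u in client_updates]
        bound = float(np.median(norms))
        clipped = [u * min(1.0, bound / (np.linalg.norm(u) + 1e-12))
                   for u in client_updates]

        # Filtering: clients below the reputation threshold contribute
        # zero weight to the global update.
        weights = np.array([r if r >= rep_threshold else 0.0
                            for r in reputations], dtype=float)
        if weights.sum() == 0.0:
            return global_model, reputations  # no trusted updates this round
        weights /= weights.sum()

        # Reputation-weighted average of the clipped updates.
        agg_update = sum(w * u for w, u in zip(weights, clipped))

        # Reputation refresh: reward alignment with the aggregate
        # direction (cosine similarity), decayed over rounds (assumed scheme).
        new_reps = []
        for u, r in zip(clipped, reputations):
            cos = float(np.dot(u, agg_update) /
                        (np.linalg.norm(u) * np.linalg.norm(agg_update) + 1e-12))
            new_reps.append(decay * r + (1 - decay) * max(cos, 0.0))

        return global_model + agg_update, new_reps

With flattened parameter vectors, a round would be driven as model, reps = aggregate_round(model, updates, reps); because the logic touches only the aggregation step, it slots into a standard FL loop as a plug-in, consistent with the abstract's claim.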

Department(s)

Computer Science

Keywords and Phrases

distributed backdoor; federated learning; poisoning attacks

Document Type

Article - Conference proceedings

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2025 Institute of Electrical and Electronics Engineers. All rights reserved.

Publication Date

01 Jan 2025
