Abstract

Federated learning distributes model training among multiple clients who, driven by privacy concerns, train on their local data and share only model weights for iterative aggregation on the server. In this work, we explore the threat of collusion attacks, in which multiple malicious clients mount targeted attacks (e.g., label flipping) in a federated learning configuration. By leveraging client weights and the correlation among them, we develop a graph-based algorithm to detect malicious clients. Finally, we validate the effectiveness of our algorithm in the presence of a varying number of attackers on a classification task using the well-known Fashion-MNIST dataset.
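The core idea of correlation-based detection can be illustrated with a minimal sketch. This is not the paper's actual algorithm; it assumes each client's weight update is available as a flattened vector, flags clients whose updates are mutually highly correlated, and groups them via connected components of a thresholded correlation graph. The function name, threshold, and grouping rule are illustrative choices, not details from the paper.

```python
import numpy as np

def detect_colluders(weights, threshold=0.9):
    """Flag clients whose flattened weight updates are mutually highly
    correlated. `weights` is a list of 1-D numpy arrays (one per client).
    Returns indices of the largest correlated group, or [] if none."""
    n = len(weights)
    W = np.stack(weights)                        # (n_clients, n_params)
    corr = np.corrcoef(W)                        # pairwise Pearson correlation
    adj = (corr > threshold) & ~np.eye(n, dtype=bool)

    # Connected components of the correlation graph via union-find.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]        # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if adj[i, j]:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    suspects = max(groups.values(), key=len)
    return suspects if len(suspects) > 1 else []

# Hypothetical scenario: 5 benign clients with independent updates,
# 3 colluders sharing nearly the same update direction.
rng = np.random.default_rng(0)
benign = [rng.normal(size=100) for _ in range(5)]
base = rng.normal(size=100)
colluders = [base + 0.01 * rng.normal(size=100) for _ in range(3)]
print(detect_colluders(benign + colluders))      # indices of the colluding group
```

Independent high-dimensional updates have near-zero pairwise correlation, while colluders that start from a shared poisoned objective correlate strongly, which is why a simple threshold separates the two groups in this toy setting.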

Department(s)

Computer Science

Keywords and Phrases

Attacker; correlation; federated learning

International Standard Book Number (ISBN)

978-166540926-1

Document Type

Article - Conference proceedings

Document Version

Final Version

File Type

text

Language(s)

English

Rights

© 2023 Institute of Electrical and Electronics Engineers, All rights reserved.

Publication Date

01 Jan 2022
