"Robust Federated Learning Against Targeted Attackers using Model Updat" by Priyesh Ranjan, Ashish Gupta et al.
 

Robust Federated Learning Against Targeted Attackers using Model Updates Correlation

Abstract

Robust federated learning is an emerging paradigm in machine learning that addresses the challenges of training accurate and secure models in decentralized, privacy-constrained environments. By leveraging the power of collaborative learning, this paradigm also ensures robustness against model attackers. However, federated learning setups are especially vulnerable to targeted attacks such as label-flipping and backdoor attacks. To combat these, similarity between client weight updates has gained traction as a reliable metric for attacker detection. In this chapter, we describe prior work that tackles targeted attacks by leveraging model similarity. We then present a graph-theoretic formulation that leverages model correlations and introduce two novel graph-theoretic algorithms, MST-AD and Density-AD, for detecting targeted adversaries. We also acknowledge the limitations of similarity-based algorithms in distributed attack settings. To counter such attacks, we introduce a divergence-based algorithm, Div-DBAD, and establish its superiority against distributed backdoor attacks on the setup. Experimental analysis on two standard machine learning datasets establishes the superiority of the MST-AD and Density-AD algorithms against targeted attacks and of the Div-DBAD algorithm against distributed backdoor attacks. In both scenarios, the proposed algorithms outperform the existing state of the art, maintaining a lower attack success rate while incurring minimal drops in model performance.
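
The abstract does not spell out the algorithms' internals, but the core idea of correlation-based adversary detection can be sketched as follows. The code below is a minimal, hypothetical illustration (not the authors' MST-AD implementation): it flattens each client's weight update, builds a complete graph weighted by pairwise cosine distance, computes a minimum spanning tree, and flags clients that are only reachable through unusually long MST edges. The `z_thresh` cutoff and the union-find component step are assumptions made for this sketch.

```python
"""Illustrative sketch of MST-based adversary detection over client updates.
Assumption-laden: the real MST-AD algorithm may differ in its distance metric,
edge-cutting rule, and aggregation of the benign set."""
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree


def flatten_update(update):
    """Concatenate all layer tensors of one client's update into a vector."""
    return np.concatenate([np.ravel(w) for w in update])


def mst_anomaly_detection(client_updates, z_thresh=2.0):
    """Return indices of suspected adversarial clients.

    client_updates: list of per-client updates (each a list of ndarrays).
    z_thresh: hypothetical z-score cutoff on MST edge lengths (assumption).
    """
    vecs = np.stack([flatten_update(u) for u in client_updates])

    # Pairwise cosine distance: correlated (benign) clients end up connected
    # by short edges, while outlier updates hang off long edges.
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    cos_sim = (vecs @ vecs.T) / (norms @ norms.T + 1e-12)
    dist = np.maximum(1.0 - cos_sim, 1e-12)
    np.fill_diagonal(dist, 0.0)

    # Minimum spanning tree over the complete similarity graph.
    mst = minimum_spanning_tree(dist).tocoo()
    edges = list(zip(mst.row, mst.col, mst.data))

    # Cut edges that are much longer than the average MST edge; the largest
    # remaining connected component is treated as the benign set.
    lengths = np.array([w for _, _, w in edges])
    mu, sigma = lengths.mean(), lengths.std() + 1e-12
    keep = [(i, j) for (i, j, w) in edges if (w - mu) / sigma < z_thresh]

    # Union-find over the kept edges to recover connected components.
    parent = list(range(len(client_updates)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i, j in keep:
        parent[find(i)] = find(j)

    components = {}
    for idx in range(len(client_updates)):
        components.setdefault(find(idx), []).append(idx)
    benign = set(max(components.values(), key=len))
    return [idx for idx in range(len(client_updates)) if idx not in benign]
```

In a federated round, the server would run this check on the collected updates and aggregate only the clients it did not flag; a density-based variant would instead score each client by how many near neighbors it has in the similarity graph.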

Department(s)

Computer Science

Comments

National Science Foundation, Grant CNS-2008878

International Standard Serial Number (ISSN)

1931-6836; 1931-6828

Document Type

Article - Journal

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2025 Springer. All rights reserved.

Publication Date

01 Jan 2025
