Abstract

Federated learning is a privacy-preserving alternative for distributed learning that involves no transfer of raw data. Because the server has no control over clients' actions, adversaries may participate in learning to corrupt the underlying model. A backdoor attacker is one such adversary, who injects a trigger pattern into the data to manipulate the model's outcomes on a specific sub-task. This work aims to identify backdoor attackers and to mitigate their effects by isolating their weight updates. Leveraging the correlation between clients' gradients, we propose two graph-theoretic algorithms to separate attackers from benign clients. Under a classification task, the experimental results show that our algorithms are effective and robust to attackers who add backdoor trigger patterns at different locations in targeted images. The results also show that our algorithms are superior to existing methods, especially when the attackers outnumber the benign clients.
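The abstract only outlines the approach, and the paper's two algorithms are not reproduced here. As a rough illustration of the underlying idea, the following is a minimal sketch (not the authors' method): it assumes that backdoor attackers' weight updates are strongly correlated with one another, builds a graph over clients by thresholding pairwise cosine similarity of their flattened updates, and isolates the most cohesive connected component as suspected attackers. The function names, the threshold `tau`, and the cohesion heuristic are all illustrative assumptions.

```python
import numpy as np

def flatten(update):
    # Flatten a client's list of weight-update tensors into one vector.
    return np.concatenate([w.ravel() for w in update])

def correlation_graph(updates, tau=0.5):
    """Adjacency matrix with an edge (i, j) whenever the cosine
    similarity of clients i and j's flattened updates exceeds tau."""
    vecs = np.stack([flatten(u) for u in updates])
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    sim = vecs @ vecs.T
    adj = (sim > tau) & ~np.eye(len(updates), dtype=bool)
    return sim, adj

def connected_components(adj):
    # Plain depth-first search over the thresholded graph.
    n, seen, comps = len(adj), set(), []
    for s in range(n):
        if s in seen:
            continue
        stack, comp = [s], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(int(u) for u in np.flatnonzero(adj[v]))
        comps.append(comp)
    return comps

def split_clients(updates, tau=0.5):
    """Flag the component with the highest mean intra-component
    similarity as suspected attackers, so the server can isolate
    (exclude) their updates from aggregation. Heuristic: attackers
    share a trigger objective, so their gradients cluster tightly."""
    sim, adj = correlation_graph(updates, tau)
    comps = connected_components(adj)

    def cohesion(comp):
        # Mean pairwise similarity inside a component (off-diagonal only).
        if len(comp) < 2:
            return -1.0
        block = sim[np.ix_(comp, comp)]
        return (block.sum() - len(comp)) / (len(comp) * (len(comp) - 1))

    comps.sort(key=cohesion, reverse=True)
    suspected = comps[0] if cohesion(comps[0]) > tau else []
    benign = [i for i in range(len(updates)) if i not in suspected]
    return benign, suspected
```

Because the decision is based on cluster cohesion rather than cluster size, a sketch like this does not assume attackers are a minority, which is consistent with the abstract's claim about attackers outnumbering benign clients; the actual detection criteria used in the paper may differ.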

Department(s)

Computer Science

Keywords and Phrases

backdoor; federated learning; robustness; targeted attackers

International Standard Book Number (ISBN)

978-1-6654-9427-4

Document Type

Article - Conference proceedings

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2023 Institute of Electrical and Electronics Engineers. All rights reserved.

Publication Date

01 Jan 2023
