Resisting Backdoor Attacks in Federated Learning via Bidirectional Elections and Individual Perspective

Authors

  • Zhen Qin, College of Computer Science and Technology, Zhejiang University, Hangzhou, China
  • Feiyi Chen, College of Computer Science and Technology, Zhejiang University, Hangzhou, China
  • Chen Zhi, School of Software Technology, Zhejiang University, Ningbo, China
  • Xueqiang Yan, Huawei Technologies Co. Ltd., Shanghai, China
  • Shuiguang Deng, College of Computer Science and Technology, Zhejiang University, Hangzhou, China

DOI:

https://doi.org/10.1609/aaai.v38i13.29385

Keywords:

ML: Distributed Machine Learning & Federated Learning

Abstract

Existing approaches defend against backdoor attacks in federated learning (FL) mainly by a) mitigating the impact of infected models or b) excluding infected models. The former degrades the accuracy of the global model, while the latter usually relies on globally clear boundaries between benign and infected model updates. In reality, however, model updates easily become mixed and scattered because local data distributions are diverse. This work focuses on excluding infected models in FL. Unlike previous approaches that take a global view, we propose Snowball, a novel anti-backdoor FL framework based on bidirectional elections from an individual perspective, inspired by one principle we deduce and two established principles in FL and deep learning. It is characterized by a) a bottom-up election, in which each candidate model update votes for several peer updates so that a few updates are elected as selectees for aggregation; and b) a top-down election, in which the selectees progressively enlarge themselves by picking up additional updates from the candidates. We compare Snowball with state-of-the-art defenses against backdoor attacks in FL on five real-world datasets, demonstrating its superior resistance to backdoor attacks and its slight impact on the accuracy of the global model.
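The two election phases described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the similarity measure (cosine), the voting rule (each update votes for its k most similar peers), and the enlargement rule (admit the candidate closest to the mean of the current selectees) are simplifying assumptions chosen only to show the bottom-up/top-down structure; function names such as `bottom_up_election` are hypothetical.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two flattened model updates.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def bottom_up_election(updates, k=2, n_seed=2):
    """Bottom-up phase (sketch): each candidate update votes for its k
    most similar peers; the n_seed updates receiving the most votes
    become the initial selectees."""
    n = len(updates)
    votes = np.zeros(n, dtype=int)
    for i in range(n):
        # Exclude self-votes by assigning the lowest possible similarity.
        sims = [-1.0 if j == i else cosine(updates[i], updates[j]) for j in range(n)]
        for j in np.argsort(sims)[-k:]:
            votes[j] += 1
    return list(np.argsort(votes)[-n_seed:])

def top_down_election(updates, selectees, target):
    """Top-down phase (sketch): selectees progressively enlarge
    themselves by admitting the remaining candidate most similar to
    the mean of the current selectees, until `target` are chosen."""
    selectees = list(selectees)
    while len(selectees) < target:
        center = np.mean([updates[i] for i in selectees], axis=0)
        rest = [i for i in range(len(updates)) if i not in selectees]
        selectees.append(max(rest, key=lambda i: cosine(updates[i], center)))
    return sorted(selectees)
```

Under these assumptions, an infected update whose direction deviates from the benign majority receives few peer votes in the bottom-up phase and is never admitted in the top-down phase, so only the elected selectees enter aggregation.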

Published

2024-03-24

How to Cite

Qin, Z., Chen, F., Zhi, C., Yan, X., & Deng, S. (2024). Resisting Backdoor Attacks in Federated Learning via Bidirectional Elections and Individual Perspective. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14677-14685. https://doi.org/10.1609/aaai.v38i13.29385

Section

AAAI Technical Track on Machine Learning IV