Poisoning with Cerberus: Stealthy and Colluded Backdoor Attack against Federated Learning

Authors

  • Xiaoting Lyu, Beijing Jiaotong University
  • Yufei Han, INRIA
  • Wei Wang, Beijing Jiaotong University
  • Jingkai Liu, Beijing Jiaotong University
  • Bin Wang, Zhejiang Key Laboratory of Multi-dimensional Perception Technology, Application and Cybersecurity
  • Jiqiang Liu, Beijing Jiaotong University
  • Xiangliang Zhang, University of Notre Dame

DOI:

https://doi.org/10.1609/aaai.v37i7.26083

Keywords:

ML: Distributed Machine Learning & Federated Learning, ML: Adversarial Learning & Robustness, ML: Classification and Regression

Abstract

Are Federated Learning (FL) systems free from backdoor poisoning now that an arsenal of defense strategies is deployed? This question has significant practical implications for the utility of FL services. Despite the recent proliferation of poisoning-resilient FL methods, our study shows that carefully tuned collusion among malicious participants can minimize the trigger-induced deviation of each poisoned local model from its poison-free counterpart, which is the key to delivering stealthy backdoor attacks that circumvent a wide spectrum of state-of-the-art FL defenses. We instantiate this strategy in a distributed backdoor attack method named Cerberus Poisoning (CerP). CerP jointly tunes the backdoor trigger and constrains the poisoned model changes on each malicious participant, achieving a stealthy yet successful backdoor attack against a wide range of defensive mechanisms for federated learning. Our extensive study on 3 large-scale benchmark datasets and 13 mainstream defensive mechanisms confirms that CerP poses a severe threat to the integrity and security of federated learning practice, despite the abundance of robust FL methods.
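The abstract describes two coupled levers: learning the backdoor trigger and constraining how far each poisoned local model drifts from the global model. The sketch below is a minimal PyTorch illustration of that idea under our own assumptions; the function name cerp_local_step, the stealth weight lambda_stealth, and the L2 proximity penalty are hypothetical choices for exposition, not the authors' reference implementation.

```python
# Illustrative sketch of a CerP-style local poisoning step (our assumptions,
# not the paper's code): jointly optimize the trigger so stamped inputs map
# to the target class, while an L2 proximity penalty keeps the poisoned
# model close to the current global model (the stealth constraint).
import torch
import torch.nn as nn
import torch.nn.functional as F

def cerp_local_step(model, global_params, images, labels, trigger, mask,
                    target_class, opt_model, opt_trigger,
                    lambda_stealth=0.1, poison_frac=0.5):
    n_poison = int(poison_frac * images.size(0))
    poisoned = images.clone()
    # Stamp the learnable trigger patch onto a fraction of the batch.
    poisoned[:n_poison] = (1 - mask) * poisoned[:n_poison] + mask * trigger
    mixed_labels = labels.clone()
    mixed_labels[:n_poison] = target_class

    logits = model(poisoned)
    task_loss = F.cross_entropy(logits, mixed_labels)

    # Stealth regularizer: penalize distance to the (detached) global model.
    prox = sum((p - g).pow(2).sum()
               for p, g in zip(model.parameters(), global_params))
    loss = task_loss + lambda_stealth * prox

    opt_model.zero_grad()
    opt_trigger.zero_grad()
    loss.backward()
    opt_model.step()
    opt_trigger.step()
    trigger.data.clamp_(0.0, 1.0)  # keep the trigger a valid image patch
    return loss.item()

# Toy wiring (hypothetical 32x32 RGB task with random data):
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
global_params = [p.detach().clone() for p in model.parameters()]
trigger = torch.rand(3, 32, 32, requires_grad=True)
mask = torch.zeros(3, 32, 32)
mask[:, -5:, -5:] = 1.0  # 5x5 corner patch
opt_model = torch.optim.SGD(model.parameters(), lr=0.01)
opt_trigger = torch.optim.Adam([trigger], lr=0.01)
images, labels = torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,))
print(cerp_local_step(model, global_params, images, labels, trigger, mask,
                      target_class=0, opt_model=opt_model, opt_trigger=opt_trigger))
```

In a colluding attack, each malicious participant would run such steps every round, with the proximity penalty keeping its submitted update statistically close to benign ones so that anomaly-based aggregation defenses have little signal to filter on.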

Published

2023-06-26

How to Cite

Lyu, X., Han, Y., Wang, W., Liu, J., Wang, B., Liu, J., & Zhang, X. (2023). Poisoning with Cerberus: Stealthy and Colluded Backdoor Attack against Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 9020-9028. https://doi.org/10.1609/aaai.v37i7.26083

Issue

Vol. 37 No. 7 (2023)

Section

AAAI Technical Track on Machine Learning II