Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning

Authors

  • Tao Liu College of Computer Science and Technology, Harbin Engineering University, China
  • Yuhang Zhang College of Computer Science and Technology, Harbin Engineering University, China
  • Zhu Feng College of Computer Science and Technology, Harbin Engineering University, China
  • Zhiqin Yang Southampton Ocean Engineering Joint Institute, Harbin Engineering University, China
  • Chen Xu College of Computer Science and Technology, Harbin Engineering University, China
  • Dapeng Man College of Computer Science and Technology, Harbin Engineering University, China
  • Wu Yang College of Computer Science and Technology, Harbin Engineering University, China

DOI:

https://doi.org/10.1609/aaai.v38i19.30131

Keywords:

General

Abstract

Backdoors injected into federated learning models are diluted by subsequent benign updates. This is reflected in a significant drop in attack success rate as training rounds progress, until the attack ultimately fails. We introduce a new metric, attack persistence, to quantify the degree of this weakening backdoor effect. Since improving attack persistence has received little attention, we propose the Full Combination Backdoor Attack (FCBA). FCBA aggregates richer combined-trigger information to form a more complete backdoor pattern in the global model. The resulting backdoored global model is more resilient to benign updates, yielding a higher attack success rate on the test set. We evaluate FCBA on three datasets with two models across various settings. FCBA's persistence outperforms state-of-the-art federated learning backdoor attacks. On GTSRB, 120 rounds after the attack, our attack success rate is over 50% higher than the baseline. The core code of our method is available at https://github.com/PhD-TaoLiu/FCBA.
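To make the "full combination" idea concrete, below is a minimal sketch based only on the abstract's description: it assumes the global trigger is split into k pixel-block sub-patterns and that each malicious client poisons its local data with one non-empty combination of those sub-patterns, so the aggregated global model collectively sees all 2^k - 1 combinations. The sub-trigger layout, target label, poison fraction, and round-robin client assignment are all illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
# Minimal sketch of a full-combination trigger scheme, inferred from the
# abstract. All constants below are hypothetical placeholders.
from itertools import chain, combinations

import numpy as np

# Hypothetical sub-trigger locations: (row, col, height, width) pixel blocks.
SUB_TRIGGERS = [(0, 0, 2, 4), (0, 6, 2, 4), (3, 0, 2, 4), (3, 6, 2, 4)]
TARGET_LABEL = 0  # hypothetical backdoor target class


def all_nonempty_combinations(parts):
    """Enumerate all 2^k - 1 non-empty combinations of k trigger parts."""
    return list(chain.from_iterable(
        combinations(parts, r) for r in range(1, len(parts) + 1)))


def apply_trigger(image, combo, value=1.0):
    """Stamp one chosen combination of sub-triggers onto an (H, W) image."""
    poisoned = image.copy()
    for r, c, h, w in combo:
        poisoned[r:r + h, c:c + w] = value
    return poisoned


def poison_batch(images, labels, combo, poison_frac=0.3):
    """Stamp and relabel a fraction of one malicious client's local batch."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    for i in range(n_poison):
        images[i] = apply_trigger(images[i], combo)
        labels[i] = TARGET_LABEL
    return images, labels


combos = all_nonempty_combinations(SUB_TRIGGERS)  # 15 combinations for k=4

# Toy local data for demonstration only.
rng = np.random.default_rng(0)
images = rng.random((32, 8, 10))    # 32 grayscale 8x10 images
labels = rng.integers(1, 10, 32)    # non-target labels

# Round-robin assignment: malicious client j trains on combination
# j mod len(combos), so the clients jointly cover the full combination space.
for j in range(4):                  # e.g., 4 malicious clients this round
    x_p, y_p = poison_batch(images, labels, combos[j % len(combos)])
```

Under this reading, the advantage over a single fixed trigger is that every subset of the trigger pattern is reinforced during training, so later benign updates must erase many overlapping backdoor associations rather than one, which is consistent with the persistence gains the abstract reports.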

Published

2024-03-24

How to Cite

Liu, T., Zhang, Y., Feng, Z., Yang, Z., Xu, C., Man, D., & Yang, W. (2024). Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(19), 21359-21367. https://doi.org/10.1609/aaai.v38i19.30131

Section

AAAI Technical Track on Safe, Robust and Responsible AI Track