SADBA: Self-Adaptive Distributed Backdoor Attack Against Federated Learning
DOI:
https://doi.org/10.1609/aaai.v39i16.33820
Abstract
Backdoor attacks in federated learning (FL) face challenges such as lower attack success rates and compromised main task accuracy (MA) compared to local training. Existing methods like the distributed backdoor attack (DBA) mitigate these issues by modifying malicious clients' updates and partitioning global triggers to enhance backdoor persistence and stealth. The recent full combination backdoor attack (FCBA) further improves backdoor efficiency with a full combination strategy. However, these methods are mainly applicable to small-scale FL. In large-scale FL, small trigger patterns have diminished impact, and scaling them up requires controlling exponentially more clients, which poses significant challenges, while simply reverting to DBA may degrade backdoor performance. To overcome these challenges, we propose the self-adaptive distributed backdoor attack (SADBA), which achieves performance similar to FCBA with a lower percentage of malicious clients (PMC). It also adapts more flexibly through an optimized model poisoning strategy and a self-adaptive data poisoning strategy. Experiments demonstrate that SADBA outperforms state-of-the-art methods, achieving higher or comparable backdoor performance and MA across various datasets with limited PMC.
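The trigger-partitioning idea the abstract builds on (introduced by DBA and refined by FCBA and SADBA) can be illustrated with a minimal sketch: a global trigger pattern is split into local sub-triggers, each stamped onto poisoned samples by a different malicious client, so that the full pattern only ever appears at inference time. The function names, the round-robin split, and the 2x4 corner patch below are illustrative assumptions, not the paper's actual partitioning or poisoning strategy.

```python
def partition_global_trigger(trigger_coords, num_malicious):
    """Round-robin split of a global trigger's pixel coordinates into
    one local trigger per malicious client (illustrative split rule)."""
    return [trigger_coords[i::num_malicious] for i in range(num_malicious)]


def stamp_local_trigger(image, local_trigger, value=1.0):
    """Return a copy of `image` with one client's local trigger applied;
    each client poisons its training data with only its own sub-pattern."""
    poisoned = [row[:] for row in image]
    for r, c in local_trigger:
        poisoned[r][c] = value
    return poisoned


# Hypothetical global trigger: a 2x4 pixel patch in the image corner.
global_trigger = [(r, c) for r in range(2) for c in range(4)]
local_triggers = partition_global_trigger(global_trigger, num_malicious=4)

# One malicious client stamps its share onto a blank 28x28 image.
image = [[0.0] * 28 for _ in range(28)]
poisoned = stamp_local_trigger(image, local_triggers[0])
```

The union of the local triggers reconstructs the global trigger, which is the property the distributed attack relies on; SADBA's contribution, per the abstract, is adapting this scheme so large-scale FL does not require exponentially more malicious clients.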
Published
2025-04-11
How to Cite
Feng, J., Lai, Y., Sun, H., & Ren, B. (2025). SADBA: Self-Adaptive Distributed Backdoor Attack Against Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(16), 16568–16576. https://doi.org/10.1609/aaai.v39i16.33820
Section
AAAI Technical Track on Machine Learning II