Poisoning with a Pill: Circumventing Detection in Federated Learning
DOI:
https://doi.org/10.1609/aaai.v40i26.39290

Abstract
Federated learning (FL) protects data privacy by enabling distributed model training without direct access to client data. However, its distributed nature makes it vulnerable to model and data poisoning attacks. While numerous defenses filter malicious clients using statistical metrics, they overlook model redundancy: not all parameters contribute equally to model and attack performance. Current attacks manipulate all model parameters uniformly, making them more detectable, while defenses focus on the overall statistics of client updates, leaving gaps for more sophisticated attacks. We propose an attack-agnostic augmentation method that enhances the stealthiness and effectiveness of existing poisoning attacks in FL, exposing flaws in current defenses and highlighting the need for fine-grained FL security. Our three-stage methodology (pill construction, pill poisoning, and pill injection) injects poison into a compact subnet (i.e., a pill) of the global model during iterative FL training. Experimental results show that FL poisoning attacks enhanced by our method bypass 8 state-of-the-art (SOTA) defenses, achieving up to a 7x increase in error rate, and on average more than a 2x increase, on both IID and non-IID data, in both cross-silo and cross-device FL systems.

Published
2026-03-14
How to Cite
Guo, H., Wang, H., Song, T., Zheng, T., Hua, Y., Guan, H., & Zhang, X. (2026). Poisoning with a Pill: Circumventing Detection in Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(26), 21432–21440. https://doi.org/10.1609/aaai.v40i26.39290
Issue
Section
AAAI Technical Track on Machine Learning III
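The core idea in the abstract, namely confining a poisoning attack to a small "pill" subnet so the overall update statistics stay close to benign, can be illustrated with a minimal sketch. This is not the authors' algorithm: the pill-selection criterion (`make_pill_mask`, here simply the largest-magnitude coordinates) and the 1% pill size are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_pill_mask(update, frac=0.01):
    # Hypothetical pill construction: pick the small fraction of coordinates
    # with the largest magnitude in the benign update. The paper's actual
    # selection criterion is not specified in the abstract.
    k = max(1, int(frac * update.size))
    idx = np.argsort(np.abs(update))[-k:]
    mask = np.zeros(update.shape, dtype=bool)
    mask[idx] = True
    return mask

def poison_with_pill(benign_update, malicious_update, frac=0.01):
    # Pill poisoning + injection: keep the benign update everywhere except
    # the pill coordinates, where the malicious values are substituted.
    mask = make_pill_mask(benign_update, frac)
    out = benign_update.copy()
    out[mask] = malicious_update[mask]
    return out

benign = rng.normal(size=10_000)
malicious = -benign  # e.g. a sign-flipping attack, restricted to the pill
poisoned = poison_with_pill(benign, malicious, frac=0.01)

# Only ~1% of coordinates change, so aggregate statistics of the poisoned
# update remain close to the benign one, which is what lets the attack
# slip past defenses that screen whole-update statistics.
changed_frac = np.mean(poisoned != benign)
```

Because 99% of the coordinates are untouched, distance- or norm-based outlier filters that compare whole client updates see a vector nearly identical to a benign one, which mirrors the abstract's argument that uniform-manipulation attacks are easier to detect than subnet-confined ones.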