Delving into the Adversarial Robustness of Federated Learning
DOI:
https://doi.org/10.1609/aaai.v37i9.26331
Keywords:
ML: Distributed Machine Learning & Federated Learning
Abstract
Models trained in Federated Learning (FL) are as vulnerable to adversarial examples as centrally trained models, yet the adversarial robustness of federated learning remains largely unexplored. This paper sheds light on this challenge. To foster a better understanding of the adversarial vulnerability of existing FL methods, we conduct comprehensive robustness evaluations across various attacks and adversarial training methods. Moreover, we reveal the negative impact of directly adopting adversarial training in FL: it seriously hurts test accuracy, especially in non-IID settings. We propose a novel algorithm, Decision Boundary based Federated Adversarial Training (DBFAT), which consists of two components, local re-weighting and global regularization, to improve both the accuracy and robustness of FL systems. Extensive experiments on multiple datasets demonstrate that DBFAT consistently outperforms other baselines under both IID and non-IID settings.
Downloads
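The abstract refers to adversarial training inside a federated setting. The paper's DBFAT algorithm is not detailed here, so the following is only a minimal, generic sketch of the baseline it builds on: FedAvg where each client trains on FGSM-perturbed inputs for a simple logistic-regression model. All function names (`fgsm_perturb`, `local_adv_train`, `fedavg_adv`) and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(w, X, y, eps):
    # FGSM adversarial examples for logistic regression:
    # gradient of the cross-entropy loss w.r.t. each input x is (p - y) * w.
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)
    return X + eps * np.sign(grad_x)

def local_adv_train(w, X, y, eps=0.1, lr=0.5, steps=20):
    # One client's local adversarial training round:
    # regenerate FGSM examples each step, then take a gradient step on them.
    w = w.copy()
    for _ in range(steps):
        X_adv = fgsm_perturb(w, X, y, eps)
        p = sigmoid(X_adv @ w)
        grad_w = X_adv.T @ (p - y) / len(y)
        w -= lr * grad_w
    return w

def fedavg_adv(clients, dim, rounds=10):
    # FedAvg aggregation: average locally adversarially-trained models,
    # weighted by each client's dataset size.
    w = np.zeros(dim)
    total = sum(len(y) for _, y in clients)
    for _ in range(rounds):
        w = sum(len(y) / total * local_adv_train(w, X, y)
                for X, y in clients)
    return w
```

On linearly separable synthetic data split across two clients, this sketch recovers a decision boundary close to the true one; the paper's observation is that on realistic non-IID splits such naive federated adversarial training degrades clean accuracy, which DBFAT's re-weighting and regularization aim to fix.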
Published
2023-06-26
How to Cite
Zhang, J., Li, B., Chen, C., Lyu, L., Wu, S., Ding, S., & Wu, C. (2023). Delving into the Adversarial Robustness of Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 11245-11253. https://doi.org/10.1609/aaai.v37i9.26331
Issue
Section
AAAI Technical Track on Machine Learning IV