FedInv: Byzantine-Robust Federated Learning by Inversing Local Model Updates
Keywords: Machine Learning (ML)
Abstract
Federated learning (FL) is a privacy-preserving distributed machine learning paradigm that enables multiple clients to collaboratively train statistical models without disclosing their raw training data. However, the inaccessible local training data and uninspectable local training processes make FL susceptible to various Byzantine attacks (e.g., data poisoning and model poisoning attacks), which aim to manipulate the FL training process and degrade model performance. Most existing Byzantine-robust FL schemes cannot effectively defend against stealthy poisoning attacks that craft poisoned models statistically similar to benign ones. The situation worsens when many clients are compromised or when data among clients are highly non-independent and identically distributed (non-IID). To address these issues, we propose FedInv, a novel Byzantine-robust FL framework based on inversing local model updates. Specifically, in each round of local model aggregation in FedInv, the parameter server first inverses the local model update submitted by each client to generate a corresponding dummy dataset. The server then identifies the dummy datasets with exceptional Wasserstein distances from the others and excludes the corresponding local model updates from model aggregation. We conduct an extensive experimental evaluation of FedInv. The results demonstrate that FedInv significantly outperforms existing robust FL schemes in defending against stealthy poisoning attacks under highly non-IID data partitions.
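The filtering step described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the update-inversion step is assumed to have already produced per-client dummy feature samples, the closed-form 1-D Wasserstein distance over sorted samples stands in for the paper's distance computation, and the fixed exclusion count `n_exclude` is a hypothetical parameter.

```python
import numpy as np

def wasserstein_1d(a, b):
    # For equal-size empirical samples, the 1-D Wasserstein-1 distance
    # reduces to the mean absolute difference of the sorted samples.
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def filter_and_aggregate(dummy_feats, updates, n_exclude=1):
    # dummy_feats: one flattened dummy-data sample per client (assumed to
    #   come from inverting that client's local model update).
    # updates: the clients' local model-update vectors, aligned by index.
    n = len(dummy_feats)
    scores = np.zeros(n)
    # Score each client by its total Wasserstein distance to all others;
    # poisoned updates are expected to yield outlying dummy data.
    for i in range(n):
        for j in range(n):
            if i != j:
                scores[i] += wasserstein_1d(dummy_feats[i], dummy_feats[j])
    # Keep the clients whose dummy data are closest to the rest,
    # then average their updates (plain FedAvg over the survivors).
    keep = np.argsort(scores)[: n - n_exclude]
    return np.mean([updates[k] for k in keep], axis=0), keep
```

As a usage example, three clients with similar dummy distributions and one far-off client: the outlier's update is excluded and the remaining updates are averaged.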
How to Cite
Zhao, B., Sun, P., Wang, T., & Jiang, K. (2022). FedInv: Byzantine-Robust Federated Learning by Inversing Local Model Updates. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 9171-9179. https://doi.org/10.1609/aaai.v36i8.20903
AAAI Technical Track on Machine Learning III