TY  - JOUR
AU  - Zhao, Bo
AU  - Sun, Peng
AU  - Wang, Tao
AU  - Jiang, Keyu
PY  - 2022/06/28
Y2  - 2024/03/28
TI  - FedInv: Byzantine-Robust Federated Learning by Inversing Local Model Updates
JF  - Proceedings of the AAAI Conference on Artificial Intelligence
JA  - AAAI
VL  - 36
IS  - 8
SE  - AAAI Technical Track on Machine Learning III
DO  - 10.1609/aaai.v36i8.20903
UR  - https://ojs.aaai.org/index.php/AAAI/article/view/20903
SP  - 9171-9179
AB  - Federated learning (FL) is a privacy-preserving distributed machine learning paradigm that enables multiple clients to collaboratively train statistical models without disclosing raw training data. However, the inaccessible local training data and uninspectable local training process make FL susceptible to various Byzantine attacks (e.g., data poisoning and model poisoning attacks), which aim to manipulate the FL model training process and degrade the model performance. Most existing Byzantine-robust FL schemes cannot effectively defend against stealthy poisoning attacks that craft poisoned models statistically similar to benign models. Matters worsen when many clients are compromised or data among clients are highly non-independent and identically distributed (non-IID). In this work, to address these issues, we propose FedInv, a novel Byzantine-robust FL framework based on inversing local model updates. Specifically, in each round of local model aggregation in FedInv, the parameter server first inverses the local model update submitted by each client to generate a corresponding dummy dataset. Then, the server identifies those dummy datasets with exceptional Wasserstein distances from the others and excludes the related local model updates from model aggregation. We conduct an exhaustive experimental evaluation of FedInv. The results demonstrate that FedInv significantly outperforms the existing robust FL schemes in defending against stealthy poisoning attacks under highly non-IID data partitions.
ER  -