Provably Secure Federated Learning against Malicious Clients

Authors

  • Xiaoyu Cao, Duke University
  • Jinyuan Jia, Duke University
  • Neil Zhenqiang Gong, Duke University

Keywords

Distributed Machine Learning & Federated Learning, Adversarial Learning & Robustness, Ensemble Methods

Abstract

Federated learning enables clients to collaboratively learn a shared global model without sharing their local training data with a cloud server. However, malicious clients can corrupt the global model to predict incorrect labels for testing examples. Existing defenses against malicious clients leverage Byzantine-robust federated learning methods. However, these methods cannot provably guarantee that the predicted label for a testing example is not affected by malicious clients. We bridge this gap via ensemble federated learning. In particular, given any base federated learning algorithm, we use the algorithm to learn multiple global models, each of which is learnt using a randomly selected subset of clients. When predicting the label of a testing example, we take majority vote among the global models. We show that our ensemble federated learning with any base federated learning algorithm is provably secure against malicious clients. Specifically, the label predicted by our ensemble global model for a testing example is provably not affected by a bounded number of malicious clients. Moreover, we show that our derived bound is tight. We evaluate our method on MNIST and Human Activity Recognition datasets. For instance, our method can achieve a certified accuracy of 88% on MNIST when 20 out of 1,000 clients are malicious.
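The ensemble construction described in the abstract can be sketched in a few lines: train multiple global models, each with a base federated learning algorithm on a randomly sampled subset of clients, then take a majority vote over their predicted labels at test time. Below is a minimal illustrative sketch, not the authors' implementation; `base_fl_train` is a hypothetical placeholder for any base federated learning algorithm (e.g., FedAvg), and the parameter names are assumptions.

```python
import random
from collections import Counter

def train_ensemble(clients, k, n_sub, base_fl_train, seed=0):
    """Train k global models, each learned from a random subset of n_sub clients.

    base_fl_train is a hypothetical callable: it runs the base federated
    learning algorithm on the given client subset and returns a model,
    i.e., a function mapping a testing example to a predicted label.
    """
    rng = random.Random(seed)
    models = []
    for _ in range(k):
        subset = rng.sample(clients, n_sub)  # random subset of clients
        models.append(base_fl_train(subset))
    return models

def ensemble_predict(models, x):
    """Predict the label of x by majority vote among the global models."""
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]
```

Intuitively, a bounded number of malicious clients can only appear in (and thus corrupt) a limited fraction of the sampled subsets, so as long as the honest majority of global models agrees on a label, the ensemble's prediction is unchanged; this is the structure behind the certified guarantee.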

Published

2021-05-18

How to Cite

Cao, X., Jia, J., & Gong, N. Z. (2021). Provably Secure Federated Learning against Malicious Clients. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 6885-6893. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16849

Section

AAAI Technical Track on Machine Learning I