FedCC: Federated Learning with Consensus Confirmation for Byzantine Attack Resistance (Student Abstract)

Authors

  • Woocheol Kim, Gwangju Institute of Science and Technology (GIST)
  • Hyuk Lim, Gwangju Institute of Science and Technology (GIST)

DOI:

https://doi.org/10.1609/aaai.v36i11.21627

Keywords:

Federated Learning, Byzantine Attack, Byzantine-robust Federated Learning

Abstract

In federated learning (FL), a server determines a global learning model by aggregating the local learning models of clients, and the determined global model is broadcast to all the clients. However, the global learning model can significantly deteriorate if a Byzantine attacker transmits malicious learning models trained with incorrectly labeled data. We propose a Byzantine-robust FL algorithm that reduces the success probability of Byzantine attacks by employing a consensus confirmation method. After aggregating the local models from clients, the proposed FL server validates the global model candidate by sending it to a set of randomly selected FL clients and asking them to perform local validation with their local data. If most of the validation results are positive, the global model is confirmed and broadcast to all the clients. We analytically and empirically compare the performance of the proposed FL algorithm against Byzantine attacks with that of existing FL algorithms.
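The consensus-confirmation loop described above (aggregate, sample validators, collect votes, confirm on a majority) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the averaging rule, the `validate` interface, the quorum threshold, and the toy "distance to local optimum" validation check are all assumptions made for the example.

```python
import random

def aggregate(updates):
    """Coordinate-wise average of client model vectors (FedAvg-style
    aggregation, assumed here as a simple stand-in for the server's rule)."""
    n = len(updates)
    return [sum(vals) / n for vals in zip(*updates)]

def consensus_confirm(candidate, clients, validate,
                      num_validators=3, quorum=0.5, rng=None):
    """Ask a random subset of clients to validate the global model candidate.

    `validate(client, model)` is a hypothetical callback that returns True
    when the client's local data judges the candidate acceptable. The
    candidate is confirmed only if more than `quorum` of the sampled
    validators vote positive.
    """
    rng = rng or random.Random()
    validators = rng.sample(clients, min(num_validators, len(clients)))
    positives = sum(1 for c in validators if validate(c, candidate))
    return positives / len(validators) > quorum

# Toy demo: honest updates near [1, 1]; one Byzantine update far away.
updates = [[1.0, 1.0], [1.1, 0.9], [-9.0, -9.0]]
candidate = aggregate(updates)  # dragged away from [1, 1] by the attacker

def validate(client, model):
    # A client votes "positive" if the model is close to its local optimum
    # (a stand-in for validating accuracy on local data).
    return sum((m - c) ** 2 for m, c in zip(model, client)) < 1.0

honest_clients = [[1.0, 1.0], [1.05, 0.95], [0.9, 1.1]]

# The poisoned candidate fails consensus confirmation ...
poisoned_ok = consensus_confirm(candidate, honest_clients, validate,
                                rng=random.Random(0))   # → False
# ... while a clean aggregate of honest updates is confirmed.
clean_ok = consensus_confirm(aggregate(updates[:2]), honest_clients, validate,
                             rng=random.Random(0))      # → True
```

The point of the sketch is the control flow, not the validation rule: a malicious update that shifts the aggregate is rejected by the randomly sampled validators before the model is ever broadcast.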

Published

2022-06-28

How to Cite

Kim, W., & Lim, H. (2022). FedCC: Federated Learning with Consensus Confirmation for Byzantine Attack Resistance (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 12981-12982. https://doi.org/10.1609/aaai.v36i11.21627