A Huber Loss Minimization Approach to Byzantine Robust Federated Learning

Authors

  • Puning Zhao, Zhejiang Lab
  • Fei Yu, Zhejiang Lab
  • Zhiguo Wan, Zhejiang Lab

DOI:

https://doi.org/10.1609/aaai.v38i19.30181

Keywords:

General

Abstract

Federated learning systems are susceptible to adversarial attacks. To combat this, we introduce a novel aggregator based on Huber loss minimization and provide a comprehensive theoretical analysis. Under the independent and identically distributed (i.i.d.) assumption, our approach has several advantages over existing methods. Firstly, it has optimal dependence on epsilon, the fraction of attacked clients. Secondly, it does not require precise knowledge of epsilon. Thirdly, it allows different clients to have unequal data sizes. We then broaden our analysis to non-i.i.d. data, in which clients have slightly different distributions.
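
The abstract describes replacing the usual averaging step of federated learning with the minimizer of a sum of Huber losses over the client updates. The following is a minimal sketch of that general idea, not the authors' exact estimator: it minimizes a weighted sum of Huber losses on the residual norms ||z - g_i|| using an iteratively reweighted fixed-point update. The function name `huber_aggregate`, the threshold `delta`, and the uniform default weights are illustrative assumptions.

```python
import numpy as np

def huber_aggregate(updates, weights=None, delta=1.0, iters=100, tol=1e-8):
    """Aggregate client updates by minimizing a weighted sum of Huber losses
    on the residual norms ||z - g_i||, via an IRLS (Weiszfeld-style) update.

    updates : (n_clients, dim) array of client gradients/updates
    weights : per-client weights (e.g. proportional to data size); uniform if None
    delta   : Huber threshold separating the quadratic and linear regimes
    """
    g = np.asarray(updates, dtype=float)
    n = g.shape[0]
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()

    z = np.average(g, axis=0, weights=w)      # start from the weighted mean
    for _ in range(iters):
        r = np.linalg.norm(g - z, axis=1)     # residual norm of each client
        # Huber influence: full weight inside delta, down-weighted (delta/r) outside
        c = np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))
        wc = w * c
        z_new = (wc[:, None] * g).sum(axis=0) / wc.sum()
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z
```

For large delta this aggregate approaches the weighted mean, while for small delta it approaches the geometric median, so the Huber threshold interpolates between the two classical aggregators; the per-client weights are one natural way to accommodate unequal data sizes, as mentioned in the abstract.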

Published

2024-03-24

How to Cite

Zhao, P., Yu, F., & Wan, Z. (2024). A Huber Loss Minimization Approach to Byzantine Robust Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(19), 21806-21814. https://doi.org/10.1609/aaai.v38i19.30181

Section

AAAI Technical Track on Safe, Robust and Responsible AI Track