Federated Learning with Extremely Noisy Clients via Negative Distillation

Authors

  • Yang Lu, Xiamen University
  • Lin Chen, Xiamen University
  • Yonggang Zhang, Hong Kong Baptist University
  • Yiliang Zhang, Xiamen University
  • Bo Han, Hong Kong Baptist University
  • Yiu-ming Cheung, Hong Kong Baptist University
  • Hanzi Wang, Xiamen University

DOI:

https://doi.org/10.1609/aaai.v38i13.29329

Keywords:

ML: Distributed Machine Learning & Federated Learning, ML: Deep Learning Algorithms, PEAI: Safety, Robustness & Trustworthiness

Abstract

Federated learning (FL) has shown remarkable success in cooperatively training deep models, but it typically struggles with noisy labels. Advanced works propose to tackle label noise with a re-weighting strategy that rests on a strong assumption, i.e., mild label noise. However, this assumption may be violated in many real-world FL scenarios because of highly contaminated clients, resulting in extreme noise ratios, e.g., >90%. To tackle extremely noisy clients, we study the robustness of the re-weighting strategy and reach a pessimistic conclusion: minimizing the weight of clients trained on noisy data outperforms re-weighting strategies. To leverage models trained on noisy clients, we propose a novel approach called negative distillation (FedNed). FedNed first identifies noisy clients and then employs them, rather than discarding them, in a knowledge distillation manner. In particular, each client identified as noisy is required to train two models: one using its noisy labels and one using pseudo-labels obtained from the global model. The model trained on noisy labels serves as a ‘bad teacher’ in knowledge distillation, aiming to decrease the risk of providing incorrect information. Meanwhile, the model trained on pseudo-labels is involved in model aggregation if the client is no longer identified as noisy. Consequently, through pseudo-labeling, FedNed gradually increases the trustworthiness of models trained on noisy clients, while leveraging all clients for model aggregation through negative distillation. To verify the efficacy of FedNed, we conduct extensive experiments under various settings, demonstrating that FedNed consistently outperforms baselines and achieves state-of-the-art performance.
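The abstract describes the ‘bad teacher’ mechanism only at a high level. The following is a minimal PyTorch sketch of what a negative-distillation loss could look like, assuming the student is pushed away from the noisy-client model's soft predictions by subtracting a standard distillation term. The function name, the subtracted-KL formulation, and the `temperature` and `beta` hyperparameters are illustrative assumptions, not the paper's published formulation.

```python
import torch
import torch.nn.functional as F

def negative_distillation_loss(student_logits, bad_teacher_logits,
                               labels, temperature=2.0, beta=1.0):
    """Hypothetical sketch of negative distillation.

    Instead of adding a distillation term that pulls the student toward
    the teacher, the term is subtracted so the student is penalized for
    agreeing with the 'bad teacher' trained on noisy labels.
    """
    # Standard supervised loss on (pseudo-)labeled data.
    ce = F.cross_entropy(student_logits, labels)
    # Soft agreement between student and bad teacher (standard KD term).
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(bad_teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    # Negative distillation: subtract the KD term to push the student
    # away from the bad teacher's predictions.
    return ce - beta * kd

if __name__ == "__main__":
    student = torch.randn(8, 10)   # batch of 8 samples, 10 classes
    teacher = torch.randn(8, 10)   # 'bad teacher' logits
    y = torch.randint(0, 10, (8,))
    print(negative_distillation_loss(student, teacher, y))
```

In this sketch, `beta` would control how strongly the student is repelled from the bad teacher; the actual loss used by FedNed may differ.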

Published

2024-03-24

How to Cite

Lu, Y., Chen, L., Zhang, Y., Zhang, Y., Han, B., Cheung, Y.-M., & Wang, H. (2024). Federated Learning with Extremely Noisy Clients via Negative Distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14184-14192. https://doi.org/10.1609/aaai.v38i13.29329

Section

AAAI Technical Track on Machine Learning IV