Efficient Asynchronous Federated Learning with Prospective Momentum Aggregation and Fine-Grained Correction

Authors

  • Yu Zang Beijing University of Posts and Telecommunications
  • Zhe Xue Beijing University of Posts and Telecommunications
  • Shilong Ou Beijing University of Posts and Telecommunications
  • Lingyang Chu McMaster University
  • Junping Du Beijing University of Posts and Telecommunications
  • Yunfei Long Beijing University of Posts and Telecommunications

DOI:

https://doi.org/10.1609/aaai.v38i15.29603

Keywords:

ML: Distributed Machine Learning & Federated Learning, ML: Classification and Regression

Abstract

Asynchronous federated learning (AFL) is a distributed machine learning technique that allows multiple devices to collaboratively train deep learning models without sharing local data. However, AFL suffers from low efficiency due to poor client model training quality and slow server model convergence, both consequences of the heterogeneous nature of data and devices. To address these issues, we propose Efficient Asynchronous Federated Learning with Prospective Momentum Aggregation and Fine-Grained Correction (FedAC). Our framework consists of three key components. The first is client weight evaluation based on the temporal gradient, which weights each client by the similarity between the client and server update directions. The second is an adaptive server update with prospective weighted momentum, which uses an asynchronous buffered update strategy and prospective weighted momentum with an adaptive learning rate to update the global model on the server. The third is a client update with fine-grained gradient correction, which introduces a fine-grained gradient correction term to mitigate client drift and correct the client stochastic gradient. We conduct experiments on real and synthetic datasets and compare FedAC with existing federated learning methods. The results demonstrate that our framework effectively improves model training efficiency and AFL performance.
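The abstract describes weighting clients by the similarity between client and server update directions, then aggregating a buffer of asynchronous client updates with weighted momentum on the server. A minimal sketch of these two ideas is given below; the paper's exact formulas are not stated in the abstract, so the cosine-similarity weight, the staleness discount, and the hyperparameter names (`lr`, `beta`) are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def client_weight(client_update, server_update, eps=1e-12):
    """Illustrative client weight: cosine similarity between the client's
    update direction and the server's recent update direction, clipped to
    be non-negative so misaligned clients contribute nothing.
    (Assumption: FedAC's actual weighting rule is not given in the abstract.)"""
    cos = np.dot(client_update, server_update) / (
        np.linalg.norm(client_update) * np.linalg.norm(server_update) + eps)
    return max(cos, 0.0)

def buffered_momentum_update(global_model, momentum, buffer, server_update,
                             lr=0.1, beta=0.9):
    """Aggregate a buffer of (client_update, staleness) pairs into one
    weighted step, then apply a momentum update to the global model.
    The staleness discount 1 / (1 + staleness) is an assumed stand-in
    for the adaptive scheme described in the paper."""
    weights, updates = [], []
    for update, staleness in buffer:
        w = client_weight(update, server_update) / (1.0 + staleness)
        weights.append(w)
        updates.append(update)
    total = sum(weights)
    if total == 0.0:
        # No aligned client updates in this buffer; keep the model unchanged.
        return global_model, momentum
    agg = sum(w * u for w, u in zip(weights, updates)) / total
    momentum = beta * momentum + (1.0 - beta) * agg
    return global_model + lr * momentum, momentum
```

For example, a client whose update points in the same direction as the server's receives weight 1, while a client moving in the opposite direction is discarded; stale updates in the buffer are further down-weighted before the momentum step.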

Published

2024-03-24

How to Cite

Zang, Y., Xue, Z., Ou, S., Chu, L., Du, J., & Long, Y. (2024). Efficient Asynchronous Federated Learning with Prospective Momentum Aggregation and Fine-Grained Correction. Proceedings of the AAAI Conference on Artificial Intelligence, 38(15), 16642-16650. https://doi.org/10.1609/aaai.v38i15.29603

Issue

Vol. 38 No. 15 (2024)

Section

AAAI Technical Track on Machine Learning VI