Layer-Wise Adaptive Model Aggregation for Scalable Federated Learning

Authors

  • Sunwoo Lee, University of Southern California; Inha University
  • Tuo Zhang, University of Southern California
  • A. Salman Avestimehr, University of Southern California

DOI:

https://doi.org/10.1609/aaai.v37i7.26023

Keywords:

ML: Distributed Machine Learning & Federated Learning, ML: Scalability of ML Systems

Abstract

In Federated Learning (FL), a common approach for aggregating local solutions across clients is periodic full model averaging. It is known, however, that different layers of a neural network can exhibit different degrees of model discrepancy across the clients. The conventional full aggregation scheme does not account for this difference and synchronizes all model parameters at once, resulting in inefficient use of network bandwidth. Aggregating parameters that are already similar across the clients makes little training progress while still incurring communication cost. We propose FedLAMA, a layer-wise adaptive model aggregation scheme for scalable FL. FedLAMA adjusts the aggregation interval in a layer-wise manner, jointly considering the model discrepancy and the communication cost. This fine-grained aggregation strategy reduces the communication cost without significantly harming the model accuracy. Our extensive empirical study shows that, as the aggregation interval increases, FedLAMA suffers a remarkably smaller accuracy drop than periodic full aggregation while achieving comparable communication efficiency.
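
The general idea can be illustrated with a minimal sketch of layer-wise adaptive aggregation intervals. This is not the authors' exact FedLAMA rule: the discrepancy measure, the interval-assignment heuristic, and all function names below are illustrative assumptions, showing only how layers with small cross-client discrepancy could be synchronized less frequently.

```python
# Minimal sketch of layer-wise adaptive aggregation intervals.
# NOTE: this is an illustrative approximation, not the paper's exact algorithm.
import numpy as np

def layer_discrepancy(client_layers):
    """Mean distance of each client's layer parameters from the layer-wise average."""
    stacked = np.stack(client_layers)           # shape: (num_clients, *layer_shape)
    mean = stacked.mean(axis=0)
    return float(np.mean([np.linalg.norm(w - mean) for w in stacked]))

def assign_intervals(client_models, base_interval=2, max_interval=8):
    """Assign longer aggregation intervals to layers with small cross-client
    discrepancy, so they are synchronized less often (hypothetical heuristic)."""
    disc = {name: layer_discrepancy([m[name] for m in client_models])
            for name in client_models[0]}
    median = np.median(list(disc.values()))
    # Below-median discrepancy -> synchronize less frequently.
    return {name: (max_interval if d < median else base_interval)
            for name, d in disc.items()}

def aggregate_round(global_model, client_models, intervals, round_idx):
    """Average only the layers whose aggregation interval divides the current round."""
    for name in global_model:
        if round_idx % intervals[name] == 0:
            global_model[name] = np.mean([m[name] for m in client_models], axis=0)
    return global_model
```

Layers that stay close across clients are transmitted less often, which is where the communication savings come from; layers with large discrepancy keep a short interval so accuracy is preserved.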

Published

2023-06-26

How to Cite

Lee, S., Zhang, T., & Avestimehr, A. S. (2023). Layer-Wise Adaptive Model Aggregation for Scalable Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8491-8499. https://doi.org/10.1609/aaai.v37i7.26023

Section

AAAI Technical Track on Machine Learning II