WHALE-FL: Wireless and Heterogeneity Aware Latency Efficient Federated Learning over Mobile Devices via Adaptive Subnetwork Scheduling

Authors

  • Huai-An Su Department of Electrical and Computer Engineering, University of Houston
  • Jiaxiang Geng School of Information and Communication Engineering, Beijing University of Posts and Telecommunications
  • Liang Li Frontier Research Center, Peng Cheng Laboratory
  • Xiaoqi Qin School of Information and Communication Engineering, Beijing University of Posts and Telecommunications
  • Yanzhao Hou School of Information and Communication Engineering, Beijing University of Posts and Telecommunications
  • Hao Wang Department of Electrical and Computer Engineering, Stevens Institute of Technology
  • Xin Fu Department of Electrical and Computer Engineering, University of Houston
  • Miao Pan Department of Electrical and Computer Engineering, University of Houston

DOI:

https://doi.org/10.1609/aaai.v39i19.34272

Abstract

As a popular distributed learning paradigm, federated learning (FL) over mobile devices fosters numerous applications, yet its practical deployment is hindered by the computing and communication heterogeneity of participating devices. Some pioneering research efforts proposed to extract subnetworks from the global model and assign to each device, based on its full computing capacity, as large a subnetwork as possible for local training. Although such fixed-size subnetwork assignment enables FL training over heterogeneous mobile devices, it is unaware of (i) the dynamic changes in devices' communication and computing conditions and (ii) the FL training progress and its evolving requirements for local training contributions, both of which can substantially prolong FL training. Motivated by these dynamics, in this paper we develop a wireless and heterogeneity aware latency efficient FL (WHALE-FL) approach that accelerates FL training through adaptive subnetwork scheduling. Instead of fixing the subnetwork size, WHALE-FL introduces a novel subnetwork selection utility function to capture device and FL training dynamics, and guides each mobile device to adaptively select its subnetwork size for local training based on (a) its computing and communication capacity, (b) its dynamic computing and/or communication conditions, and (c) the FL training status and its corresponding requirements for local training contributions. Our evaluation shows that, compared with peer designs, WHALE-FL effectively accelerates FL training without sacrificing learning accuracy.
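The adaptive selection described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual utility function; the scaling assumptions (compute and upload cost growing with the squared subnetwork width ratio) and all function names and parameters are hypothetical, chosen only to show how a utility could weigh per-round latency against training contribution as conditions and progress change.

```python
def round_latency(p, flops_full, comp_speed, model_bits_full, bandwidth):
    """Estimated local-training + upload latency for subnetwork width ratio p.

    Assumption (illustrative only): both compute cost and upload size scale
    roughly with p**2, since width scaling shrinks each layer's inputs and
    outputs.
    """
    compute = (p ** 2) * flops_full / comp_speed
    upload = (p ** 2) * model_bits_full / bandwidth
    return compute + upload


def select_ratio(ratios, flops_full, comp_speed, model_bits_full,
                 bandwidth, progress, alpha=1.0):
    """Pick the ratio maximizing a toy utility: contribution - alpha * latency.

    `progress` in [0, 1] stands in for the FL training status: later rounds
    weight the contribution of a larger subnetwork more heavily.
    """
    def utility(p):
        contribution = (1.0 + progress) * p  # larger subnetworks help more late in training
        latency = round_latency(p, flops_full, comp_speed,
                                model_bits_full, bandwidth)
        return contribution - alpha * latency

    return max(ratios, key=utility)
```

Under this toy model, a device with ample compute and bandwidth selects a large subnetwork, while a constrained device, for which latency dominates the utility, falls back to a small one, which is the qualitative behavior the abstract describes.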

Published

2025-04-11

How to Cite

Su, H.-A., Geng, J., Li, L., Qin, X., Hou, Y., Wang, H., … Pan, M. (2025). WHALE-FL: Wireless and Heterogeneity Aware Latency Efficient Federated Learning over Mobile Devices via Adaptive Subnetwork Scheduling. Proceedings of the AAAI Conference on Artificial Intelligence, 39(19), 20619–20627. https://doi.org/10.1609/aaai.v39i19.34272

Section

AAAI Technical Track on Machine Learning V