Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better

Authors

  • Sameer Bibikar, The University of Texas at Austin
  • Haris Vikalo, The University of Texas at Austin
  • Zhangyang Wang, The University of Texas at Austin
  • Xiaohan Chen, The University of Texas at Austin

DOI:

https://doi.org/10.1609/aaai.v36i6.20555

Keywords:

Machine Learning (ML)

Abstract

Federated learning (FL) enables the distribution of machine learning workloads from the cloud to resource-limited edge devices. Unfortunately, current deep networks remain not only too compute-heavy for inference and training on edge devices, but also too large for communicating updates over bandwidth-constrained networks. In this paper, we develop, implement, and experimentally validate a novel FL framework termed Federated Dynamic Sparse Training (FedDST), by which complex neural networks can be deployed and trained with substantially improved efficiency in both on-device computation and in-network communication. At the core of FedDST is a dynamic process that extracts and trains sparse sub-networks from the target full network. With this scheme, "two birds are killed with one stone": instead of full models, each client performs efficient training of its own sparse network, and only sparse networks are transmitted between devices and the cloud. Furthermore, our results reveal that dynamic sparsity during FL training accommodates local heterogeneity among FL clients more flexibly than fixed, shared sparse masks. Moreover, dynamic sparsity naturally introduces an "in-time self-ensembling effect" into the training dynamics and improves FL performance even over dense training. In a realistic and challenging non-IID FL setting, FedDST consistently outperforms competing algorithms in our experiments: for instance, on non-IID CIFAR-10 it achieves a 10% accuracy advantage over FedAvgM at the same upload data cap, and the gap remains 3% even when FedAvgM is given twice the upload data cap, further demonstrating the efficacy of FedDST. Code is available at: https://github.com/bibikar/feddst.
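To make the high-level loop in the abstract concrete, below is a minimal sketch of federated dynamic sparse training on flat parameter vectors: clients train only their sparse sub-networks, periodically prune low-magnitude active weights and regrow elsewhere, and send only sparse updates to the server, which averages them coordinate-wise. The function names, the magnitude-prune / random-regrow rule, and the simple masked averaging are illustrative assumptions for this sketch, not the paper's exact FedDST procedure; see the linked repository for the actual implementation.

```python
# Illustrative sketch only: sparse local training with periodic mask
# readjustment, and server-side aggregation of sparse client updates.
import numpy as np

def make_mask(dim, density, rng):
    """Random initial sparse mask with the target density (assumed init)."""
    mask = np.zeros(dim, dtype=bool)
    mask[rng.choice(dim, size=int(density * dim), replace=False)] = True
    return mask

def local_sparse_step(weights, mask, grad_fn, lr, readjust, regrow_frac, rng):
    """One local round: masked gradient step, then optional prune-and-regrow."""
    w = weights * mask
    w -= lr * grad_fn(w) * mask          # train only the active sub-network
    if readjust:
        k = int(regrow_frac * mask.sum())
        active = np.flatnonzero(mask)
        # Prune the k active weights with smallest magnitude ...
        drop = active[np.argsort(np.abs(w[active]))[:k]]
        mask[drop] = False
        w[drop] = 0.0
        # ... and regrow k inactive coordinates (random here; gradient-based
        # regrowth is another common dynamic-sparse-training choice).
        grow = rng.choice(np.flatnonzero(~mask), size=k, replace=False)
        mask[grow] = True
    return w, mask

def aggregate(sparse_updates, masks, dim):
    """Server: average sparse client updates on coordinates any mask covers."""
    num, cov = np.zeros(dim), np.zeros(dim)
    for upd, m in zip(sparse_updates, masks):
        num += upd
        cov += m
    out = np.zeros(dim)
    covered = cov > 0
    out[covered] = num[covered] / cov[covered]
    return out

# Toy usage: quadratic objective, a few clients, a few rounds.
rng = np.random.default_rng(0)
dim, density, clients, rounds = 100, 0.2, 4, 10
target = rng.normal(size=dim)
grad_fn = lambda w: w - target           # gradient of 0.5 * ||w - target||^2
global_w = np.zeros(dim)
masks = [make_mask(dim, density, rng) for _ in range(clients)]

for t in range(rounds):
    updates = []
    for c in range(clients):
        w, masks[c] = local_sparse_step(global_w.copy(), masks[c], grad_fn,
                                        lr=0.5, readjust=(t % 2 == 0),
                                        regrow_frac=0.1, rng=rng)
        updates.append(w * masks[c])     # only sparse weights leave the client
    global_w = aggregate(updates, masks, dim)
```

In this sketch, the per-client masks drift apart over rounds, which mirrors the abstract's point that dynamic sparsity lets each client adapt its sub-network to local data rather than sharing one fixed mask.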

Published

2022-06-28

How to Cite

Bibikar, S., Vikalo, H., Wang, Z., & Chen, X. (2022). Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6080-6088. https://doi.org/10.1609/aaai.v36i6.20555

Section

AAAI Technical Track on Machine Learning I