TY - JOUR
AU - Bibikar, Sameer
AU - Vikalo, Haris
AU - Wang, Zhangyang
AU - Chen, Xiaohan
PY - 2022/06/28
Y2 - 2024/03/29
TI - Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 36
IS - 6
SE - AAAI Technical Track on Machine Learning I
DO - 10.1609/aaai.v36i6.20555
UR - https://ojs.aaai.org/index.php/AAAI/article/view/20555
SP - 6080-6088
AB - Federated learning (FL) enables distribution of machine learning workloads from the cloud to resource-limited edge devices. Unfortunately, current deep networks remain not only too compute-heavy for inference and training on edge devices, but also too large for communicating updates over bandwidth-constrained networks. In this paper, we develop, implement, and experimentally validate a novel FL framework termed Federated Dynamic Sparse Training (FedDST) by which complex neural networks can be deployed and trained with substantially improved efficiency in both on-device computation and in-network communication. At the core of FedDST is a dynamic process that extracts and trains sparse sub-networks from the target full network. With this scheme, "two birds are killed with one stone": instead of full models, each client performs efficient training of its own sparse network, and only sparse networks are transmitted between devices and the cloud. Furthermore, our results reveal that the dynamic sparsity during FL training more flexibly accommodates local heterogeneity in FL agents than fixed, shared sparse masks. Moreover, dynamic sparsity naturally introduces an "in-time self-ensembling effect" into the training dynamics, and improves FL performance even over dense training. In a realistic and challenging non-i.i.d. FL setting, FedDST consistently outperforms competing algorithms in our experiments: for instance, on non-i.i.d. CIFAR-10 it gains an accuracy advantage of 10% over FedAvgM at the same upload data cap, and the gap remains 3% even when FedAvgM is given twice the upload data cap, further demonstrating the efficacy of FedDST. Code is available at: https://github.com/bibikar/feddst.
ER -