Step-Ahead Error Feedback for Distributed Training with Compressed Gradient

Authors

  • An Xu, University of Pittsburgh
  • Zhouyuan Huo, Google
  • Heng Huang, University of Pittsburgh; JD Finance America Corporation

DOI:

https://doi.org/10.1609/aaai.v35i12.17254

Keywords:

Scalability of ML Systems, Distributed Machine Learning & Federated Learning

Abstract

Although distributed machine learning methods can speed up the training of large deep neural networks, communication cost has become a non-negligible bottleneck that constrains performance. To address this challenge, gradient compression based communication-efficient distributed learning methods were designed to reduce the communication cost, and more recently local error feedback was incorporated to compensate for the resulting performance loss. However, in this paper we show that local error feedback raises a new "gradient mismatch" problem in centralized distributed training, which can degrade performance compared with full-precision training. To solve this critical problem, we propose two novel techniques, 1) step ahead and 2) error averaging, with rigorous theoretical analysis. Both our theoretical and empirical results show that our new methods handle the "gradient mismatch" problem. The experimental results further show that, with common gradient compression schemes, our methods can train faster (in terms of training epochs) than both full-precision training and local error feedback, without performance loss.
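For context, below is a minimal sketch of the standard local error feedback mechanism that the abstract refers to as the prior approach (not the paper's step-ahead or error-averaging techniques): each worker adds its locally accumulated compression residual to the fresh gradient before compressing, and keeps the new residual locally. The `topk_compress` operator and all names here are illustrative assumptions, not the authors' implementation.

```python
import torch


def topk_compress(x: torch.Tensor, ratio: float = 0.01) -> torch.Tensor:
    """Keep only the largest-magnitude entries (a common, generic compressor)."""
    k = max(1, int(x.numel() * ratio))
    flat = x.flatten()
    idx = flat.abs().topk(k).indices
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.view_as(x)


class LocalErrorFeedback:
    """Per-worker local error feedback (illustrative sketch).

    The residual of compression is accumulated locally and added back to the
    next gradient before compressing. Because each worker's residual stays
    local and is never synchronized, the gradients actually applied can drift
    from the true averaged gradient, which is the kind of mismatch the paper
    addresses.
    """

    def __init__(self, shape, compress=topk_compress):
        self.error = torch.zeros(shape)   # locally accumulated residual
        self.compress = compress

    def step(self, grad: torch.Tensor) -> torch.Tensor:
        corrected = grad + self.error          # compensate with stored residual
        compressed = self.compress(corrected)  # only this is communicated
        self.error = corrected - compressed    # residual remains local
        return compressed
```

In a typical setup, each worker would call `step` on its local gradient, all-reduce (or send to a server) the compressed result, and apply the averaged compressed gradient to the model.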

Published

2021-05-18

How to Cite

Xu, A., Huo, Z., & Huang, H. (2021). Step-Ahead Error Feedback for Distributed Training with Compressed Gradient. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10478-10486. https://doi.org/10.1609/aaai.v35i12.17254

Section

AAAI Technical Track on Machine Learning V