Delay-Adaptive Distributed Stochastic Optimization


  • Zhaolin Ren Harvard University
  • Zhengyuan Zhou New York University
  • Linhai Qiu Google Inc.
  • Ajay Deshpande IBM Research
  • Jayant Kalagnanam IBM Research



In large-scale optimization problems, distributed asynchronous stochastic gradient descent (DASGD) is a commonly used algorithm. In most applications, a large number of computing nodes compute gradient information asynchronously, so the gradient information received at a given iteration is often stale. In the presence of such delays, which can be unbounded, the convergence of DASGD is uncertain. The contribution of this paper is twofold. First, we propose a delay-adaptive variant of DASGD in which each iteration's step-size is adjusted according to the size of the delay, and prove asymptotic convergence of the algorithm on variationally coherent stochastic problems, a class of functions that properly includes convex, quasi-convex, and star-convex functions. Second, we extend the convergence results of standard DASGD, usually applied to problems with bounded domains, to problems with unbounded domains. In this way, we extend the frontier of theoretical guarantees for distributed asynchronous optimization and provide new insights for practitioners working on large-scale optimization problems.
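The core idea of delay-adaptive DASGD can be illustrated with a minimal single-machine simulation. The sketch below is an assumption-laden toy, not the paper's algorithm: stale gradients are modeled with a queue and random delays, and the step-size rule `lr = base_lr / (sqrt(t + 1) * (1 + d))`, which shrinks the step as the staleness `d` grows, is an illustrative choice standing in for the paper's delay-adaptive schedule.

```python
import random
from collections import deque

def delay_adaptive_asgd(grad, x0, base_lr=0.5, max_delay=5, iters=200, seed=0):
    """Toy simulation of asynchronous SGD with stale gradients.

    Each iteration, a gradient is computed at the current iterate and
    queued; queued gradients are applied only after a random delay,
    mimicking asynchronous workers. The step size of each update is
    shrunk according to how stale the applied gradient is
    (illustrative rule, not the paper's exact schedule).
    """
    rng = random.Random(seed)
    x = x0
    pending = deque()  # entries are (iteration_computed, gradient)
    for t in range(iters):
        pending.append((t, grad(x)))
        # apply the oldest gradient once its random delay has elapsed
        if t - pending[0][0] >= rng.randint(0, max_delay):
            t_old, g = pending.popleft()
            d = t - t_old  # staleness (delay) of this gradient
            lr = base_lr / ((t + 1) ** 0.5 * (1 + d))
            x -= lr * g
    return x

# Example: minimize f(x) = (x - 3)^2 despite stale gradients.
x_star = delay_adaptive_asgd(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

Without the `1 / (1 + d)` factor, a very stale gradient would be applied at full strength and could push the iterate in a direction that is no longer a descent direction, which is exactly the failure mode the delay-adaptive step-size guards against.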




How to Cite

Ren, Z., Zhou, Z., Qiu, L., Deshpande, A., & Kalagnanam, J. (2020). Delay-Adaptive Distributed Stochastic Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5503-5510.



AAAI Technical Track: Machine Learning