D-SPIDER-SFO: A Decentralized Optimization Algorithm with Faster Convergence Rate for Nonconvex Problems

Authors

  • Taoxing Pan, University of Science and Technology of China
  • Jun Liu, Infinia ML, Inc.
  • Jie Wang, University of Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v34i02.5523

Abstract

Decentralized optimization algorithms have attracted intense interest recently, as they have a balanced communication pattern, especially when solving large-scale machine learning problems. The Stochastic Path-Integrated Differential Estimator Stochastic First-Order method (SPIDER-SFO) nearly achieves the algorithmic lower bound in certain regimes for nonconvex problems. However, it remains unclear whether a decentralized algorithm can achieve a convergence rate similar to that of SPIDER-SFO. To tackle this problem, we propose a decentralized variant of SPIDER-SFO, called decentralized SPIDER-SFO (D-SPIDER-SFO). We show that D-SPIDER-SFO achieves a gradient computation cost of O(ϵ⁻³) for finding an ϵ-approximate first-order stationary point, similar to its centralized counterpart. To the best of our knowledge, D-SPIDER-SFO achieves state-of-the-art performance for solving nonconvex optimization problems on decentralized networks in terms of computational cost. Experiments on different network configurations demonstrate the efficiency of the proposed method.
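To illustrate the two ingredients the abstract names, a SPIDER-type path-integrated gradient estimator combined with decentralized (gossip) averaging, the sketch below shows one plausible reading of that combination. It is not the authors' implementation: the function name, the stoch_grad interface, the mixing-matrix update X = WX − ηV, and all hyperparameters are assumptions made for illustration only.

```python
import numpy as np

def d_spider_sfo_sketch(stoch_grad, n_samples, x0, W, lr=0.05, q=10,
                        T=200, small_batch=16, seed=0):
    """Hypothetical sketch of a decentralized SPIDER-type method.

    stoch_grad(x, node, idx) -> gradient of node's local loss at x,
    averaged over the samples in `idx` (this interface is an assumption).
    W is a doubly stochastic gossip (mixing) matrix over the network.
    """
    rng = np.random.default_rng(seed)
    n_nodes = W.shape[0]
    X = np.tile(x0, (n_nodes, 1))        # one local iterate per node
    X_prev = X.copy()
    V = np.zeros_like(X)                 # SPIDER gradient estimators
    all_idx = np.arange(n_samples)
    for t in range(T):
        for i in range(n_nodes):
            if t % q == 0:
                # Restart the estimator with a full local gradient
                # every q iterations.
                V[i] = stoch_grad(X[i], i, all_idx)
            else:
                # Path-integrated differential update on a shared
                # small batch B: v_t = g(x_t; B) - g(x_{t-1}; B) + v_{t-1}.
                B = rng.integers(0, n_samples, size=small_batch)
                V[i] = stoch_grad(X[i], i, B) - stoch_grad(X_prev[i], i, B) + V[i]
        X_prev = X.copy()
        X = W @ X - lr * V               # gossip-average, then local step
    return X.mean(axis=0)                # network-averaged solution


if __name__ == "__main__":
    # Toy check: node i holds points a[i]; local loss (1/2)||x - a_k||^2.
    rng = np.random.default_rng(1)
    n_nodes, n_samples, d = 4, 100, 5
    a = rng.normal(size=(n_nodes, n_samples, d))
    W = np.full((n_nodes, n_nodes), 1.0 / n_nodes)   # complete-graph mixing
    g = lambda x, i, idx: x - a[i, idx].mean(axis=0)
    x_star = d_spider_sfo_sketch(g, n_samples, np.zeros(d), W)
    print(np.allclose(x_star, a.mean(axis=(0, 1)), atol=0.1))  # True
```

Note the design point the SPIDER estimator depends on: the gradients at x_t and x_{t-1} are evaluated on the same small batch B, so the sampling noise cancels in the difference and the estimator's variance stays controlled between the periodic large-batch restarts.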

Published

2020-04-03

How to Cite

Pan, T., Liu, J., & Wang, J. (2020). D-SPIDER-SFO: A Decentralized Optimization Algorithm with Faster Convergence Rate for Nonconvex Problems. Proceedings of the AAAI Conference on Artificial Intelligence, 34(02), 1619-1626. https://doi.org/10.1609/aaai.v34i02.5523

Section

AAAI Technical Track: Constraint Satisfaction and Optimization