Decentralized Sum-of-Nonconvex Optimization

Authors

  • Zhuanghua Liu, National University of Singapore; CNRS@CREATE LTD, 1 Create Way, #08-01 CREATE Tower, Singapore 138602
  • Bryan Kian Hsiang Low, National University of Singapore

DOI:

https://doi.org/10.1609/aaai.v38i13.29318

Keywords:

ML: Optimization

Abstract

We consider the optimization problem of minimizing a sum-of-nonconvex function, i.e., a convex function that is the average of nonconvex components. Existing stochastic algorithms for this problem focus only on the single-machine, centralized setting. In this paper, we study sum-of-nonconvex optimization in the decentralized setting. We present a new theoretical analysis of the PMGT-SVRG algorithm for this problem and prove its linear convergence. However, the convergence rate of PMGT-SVRG depends linearly on the condition number, which is undesirable for ill-conditioned problems. To remedy this issue, we propose an accelerated stochastic decentralized first-order algorithm that incorporates acceleration, gradient tracking, and multi-consensus mixing into the SVRG algorithm. The convergence rate of the proposed method depends only on the square root of the condition number. Numerical experiments on both synthetic and real-world datasets validate the theoretical guarantees of our algorithms.
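For concreteness, the problem class described in the abstract can be written as follows; the notation (n components, smoothness constant L, strong-convexity constant mu) is our own paraphrase of the standard setup, not taken verbatim from the paper:

    \min_{x \in \mathbb{R}^d} \; f(x) \;=\; \frac{1}{n} \sum_{i=1}^{n} f_i(x),

where f is mu-strongly convex but each component f_i is only assumed L-smooth and may be nonconvex. Writing \kappa = L/\mu for the condition number, the abstract's claims read: the rate established for PMGT-SVRG scales linearly in \kappa, whereas the rate of the proposed accelerated method scales with \sqrt{\kappa}.

To make the algorithmic ingredients concrete, below is a minimal, illustrative Python sketch of a decentralized SVRG loop with gradient tracking and multi-consensus mixing. It is a sketch under our own assumptions, not the authors' implementation and not the accelerated method from the paper: the ring mixing matrix W, step size eta, number of mixing rounds K, and the quadratic components (chosen only so the script runs; the setting of interest allows nonconvex components whose average is convex) are all hypothetical choices.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative problem: m agents, each holding p quadratic components in R^d.
    m, p, d = 5, 20, 10
    A = rng.standard_normal((m, p, d))
    b = rng.standard_normal((m, p))

    def grad(i, j, x):
        # Gradient of the hypothetical component f_ij(x) = 0.5 * (a_ij^T x - b_ij)^2.
        a = A[i, j]
        return (a @ x - b[i, j]) * a

    def full_local_grad(i, x):
        # Full gradient of agent i's local average, used at SVRG snapshots.
        return np.mean([grad(i, j, x) for j in range(p)], axis=0)

    # Doubly stochastic ring mixing matrix (a hypothetical topology).
    W = np.zeros((m, m))
    for i in range(m):
        W[i, i] = 0.5
        W[i, (i - 1) % m] = 0.25
        W[i, (i + 1) % m] = 0.25

    def mix(Z, rounds):
        # Multi-consensus: several gossip rounds per update for tighter agreement.
        for _ in range(rounds):
            Z = W @ Z
        return Z

    eta, K, epochs, inner = 0.02, 3, 30, p

    X = np.zeros((m, d))                           # local iterates, one row per agent
    V = np.array([full_local_grad(i, X[i]) for i in range(m)])
    S = V.copy()                                   # gradient trackers

    for _ in range(epochs):
        X_snap = X.copy()                          # per-agent SVRG snapshot
        G_snap = np.array([full_local_grad(i, X_snap[i]) for i in range(m)])
        for _ in range(inner):
            V_new = np.empty_like(V)
            for i in range(m):
                j = rng.integers(p)
                # SVRG variance-reduced estimate of agent i's local gradient.
                V_new[i] = grad(i, j, X[i]) - grad(i, j, X_snap[i]) + G_snap[i]
            S = mix(S + V_new - V, K)              # gradient tracking update
            V = V_new
            X = mix(X - eta * S, K)                # mixed descent step

    x_bar = X.mean(axis=0)
    print("consensus error:", np.linalg.norm(X - x_bar))
    print("gradient norm at average:",
          np.linalg.norm(np.mean([full_local_grad(i, x_bar) for i in range(m)], axis=0)))

In this family of methods, the gradient-tracking update lets each agent's tracker row in S approximate the network-average gradient, while the K gossip rounds per step (multi-consensus) keep the local iterates close to their average; together with SVRG's variance reduction, these are the ingredients that make linear-rate analyses possible in the decentralized setting.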

Published

2024-03-24

How to Cite

Liu, Z., & Low, B. K. H. (2024). Decentralized Sum-of-Nonconvex Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14088-14096. https://doi.org/10.1609/aaai.v38i13.29318

Issue

Vol. 38 No. 13 (2024)

Section

AAAI Technical Track on Machine Learning IV