Lock-Free Optimization for Non-Convex Problems

Authors

  • Shen-Yi Zhao, Nanjing University
  • Gong-Duo Zhang, Nanjing University
  • Wu-Jun Li, Nanjing University

DOI:

https://doi.org/10.1609/aaai.v31i1.10921

Abstract

Stochastic gradient descent (SGD) and its variants have attracted much attention in machine learning due to their efficiency and effectiveness for optimization. To handle large-scale problems, researchers have recently proposed several parallel SGD methods based on lock-free strategies (LF-PSGD) for multi-core systems. However, existing works have only proved the convergence of these LF-PSGD methods for convex problems. To the best of our knowledge, no work has proved the convergence of LF-PSGD methods for non-convex problems. In this paper, we provide a theoretical proof of the convergence of two representative LF-PSGD methods, Hogwild! and AsySVRG, for non-convex problems. Empirical results also show that both Hogwild! and AsySVRG converge on non-convex problems, which verifies our theoretical results.
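To illustrate the lock-free idea the abstract refers to, below is a minimal sketch of a Hogwild!-style parallel SGD loop: several worker threads read and write a shared parameter vector with no lock, each taking stochastic gradient steps on a toy non-convex objective. This is not the paper's implementation; the objective (a squared loss with a sigmoid link), step size, thread count, and iteration budget are assumptions chosen purely for illustration, and CPython's GIL means the threads interleave rather than run truly in parallel.

```python
# Minimal, illustrative Hogwild!-style lock-free SGD sketch (not the paper's code).
import numpy as np
import threading

rng = np.random.default_rng(0)
n_samples, dim = 1000, 10
X = rng.normal(size=(n_samples, dim))
w_true = rng.normal(size=dim)
y = 1.0 / (1.0 + np.exp(-X @ w_true))       # targets generated by a sigmoid model

w = np.zeros(dim)                            # shared parameters; no lock protects them

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def worker(seed, n_steps=2000, lr=0.05):
    local_rng = np.random.default_rng(seed)
    for _ in range(n_steps):
        i = local_rng.integers(n_samples)                # sample one data point
        p = sigmoid(X[i] @ w)                            # read shared w (possibly stale)
        grad = 2.0 * (p - y[i]) * p * (1.0 - p) * X[i]   # gradient of the non-convex squared loss
        w[:] = w - lr * grad                             # lock-free write to the shared vector

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("final mean squared loss:", np.mean((sigmoid(X @ w) - y) ** 2))
```

The key design point the sketch mirrors is that reads and updates of `w` are not synchronized, so a worker may compute its gradient from slightly stale parameters; the paper's contribution is proving that such schemes still converge when the objective is non-convex.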

Published

2017-02-13

How to Cite

Zhao, S.-Y., Zhang, G.-D., & Li, W.-J. (2017). Lock-Free Optimization for Non-Convex Problems. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10921