Inexact Proximal Gradient Methods for Non-Convex and Non-Smooth Optimization

Authors

  • Bin Gu, University of Pittsburgh
  • De Wang, University of Texas at Arlington
  • Zhouyuan Huo, University of Pittsburgh
  • Heng Huang, University of Pittsburgh

DOI:

https://doi.org/10.1609/aaai.v32i1.11802

Keywords:

Inexact Proximal Gradient, Non-Convex and Non-Smooth Optimization

Abstract

In machine learning research, proximal gradient methods are popular for solving various optimization problems with non-smooth regularization. Inexact proximal gradient methods are extremely important when solving the proximal operator exactly is time-consuming, or when the proximal operator has no analytic solution. However, existing inexact proximal gradient methods only consider convex problems; little is known about them in the non-convex setting. To address this challenge, in this paper we first propose three inexact proximal gradient algorithms, including the basic and Nesterov's accelerated versions. We then provide a theoretical analysis of the basic and Nesterov's accelerated versions. The theoretical results show that our inexact proximal gradient algorithms achieve the same convergence rates as their exact counterparts in the non-convex setting. Finally, we apply our inexact proximal gradient algorithms to three representative non-convex learning problems. Empirical results confirm the superiority of our new inexact proximal gradient algorithms.
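
To make the setting concrete, below is a minimal sketch of the basic (non-accelerated) inexact proximal gradient scheme for a composite objective F(x) = f(x) + g(x), where f is smooth (possibly non-convex) and g is non-smooth. The inner solver, the summable tolerance schedule, and the lasso-style test problem are illustrative assumptions for exposition; they are not the paper's actual algorithms or experiments.

```python
# Sketch of inexact proximal gradient descent: the proximal subproblem
# is solved only approximately, to a per-iteration tolerance eps_k.
import numpy as np

def inexact_prox_grad(grad_f, prox_inner, x0, step, eps_schedule, n_iters=100):
    """prox_inner(v, step, eps) approximately solves
        argmin_x  g(x) + (1 / (2 * step)) * ||x - v||^2
    to accuracy eps, e.g., via an iterative inner solver."""
    x = x0
    for k in range(n_iters):
        v = x - step * grad_f(x)                   # forward (gradient) step
        x = prox_inner(v, step, eps_schedule(k))   # inexact backward step
    return x

# Toy example: g(x) = lam * ||x||_1, whose exact prox is soft-thresholding.
# To mimic inexactness, an inner loop stops once it is eps-close to the
# exact prox solution.
def make_l1_prox(lam):
    def prox_inner(v, step, eps):
        exact = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)
        z = v.copy()
        while np.linalg.norm(z - exact) > eps:     # inner solve to tolerance
            z -= 0.5 * (z - exact)                 # toy contraction step
        return z
    return prox_inner

if __name__ == "__main__":
    A, b = np.random.randn(20, 10), np.random.randn(20)
    grad_f = lambda x: A.T @ (A @ x - b)           # f(x) = 0.5 * ||Ax - b||^2
    step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / Lipschitz constant
    x = inexact_prox_grad(grad_f, make_l1_prox(0.1), np.zeros(10),
                          step, eps_schedule=lambda k: 1.0 / (k + 1) ** 2)
    print("solution:", x)
```

The decaying tolerance eps_k = 1/(k+1)^2 reflects the common requirement in the inexact-prox literature that the error sequence be summable; the paper's analysis characterizes which error conditions preserve the exact method's convergence rates in the non-convex case.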

Published

2018-04-29

How to Cite

Gu, B., Wang, D., Huo, Z., & Huang, H. (2018). Inexact Proximal Gradient Methods for Non-Convex and Non-Smooth Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11802