Gradient Descent with Proximal Average for Nonconvex and Composite Regularization

Authors

  • Wenliang Zhong, Hong Kong University of Science and Technology
  • James Kwok, Hong Kong University of Science and Technology

DOI:

https://doi.org/10.1609/aaai.v28i1.8994

Keywords:

nonconvex optimization, composite regularization

Abstract

Sparse modeling has been highly successful in many real-world applications. While much of the interest has been in convex regularization, recent studies show that nonconvex regularizers can outperform their convex counterparts in many situations. However, the resulting nonconvex optimization problems are often challenging, especially for composite regularizers such as the nonconvex overlapping group lasso. In this paper, using a recent mathematical tool known as the proximal average, we propose a novel proximal gradient descent method for optimization with a wide class of nonconvex and composite regularizers. Instead of directly solving the proximal step associated with a composite regularizer, we average the solutions from the proximal problems of the constituent regularizers. This simple strategy has guaranteed convergence and low per-iteration complexity. Experimental results on a number of synthetic and real-world data sets demonstrate the effectiveness and efficiency of the proposed optimization algorithm, as well as the improved classification performance resulting from the nonconvex regularizers.
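The core idea in the abstract — take a proximal gradient step, but replace the (possibly intractable) proximal operator of the composite regularizer with a weighted average of the constituent regularizers' proximal solutions — can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the paper's exact algorithm: the constituent regularizers here (L1 and squared L2, whose proximal operators have well-known closed forms), the equal weights, the step size, and the least-squares objective are all assumptions chosen to keep the example self-contained.

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t * ||.||_1: soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_l2sq(v, t):
    # Proximal operator of (t / 2) * ||.||_2^2: simple shrinkage.
    return v / (1.0 + t)

def pa_prox_gradient(grad_f, x0, proxes, weights, step=0.1, iters=200):
    """Proximal-average gradient descent (illustrative sketch).

    Instead of solving the proximal step of the composite regularizer
    sum_i w_i * r_i directly, average the constituent prox solutions:
        x <- sum_i w_i * prox_{step * r_i}(x - step * grad_f(x))
    """
    x = x0.astype(float)
    for _ in range(iters):
        y = x - step * grad_f(x)                        # gradient step on the smooth part
        x = sum(w * p(y, step) for w, p in zip(weights, proxes))  # proximal average
    return x

# Usage: least squares with an averaged L1 + squared-L2 regularizer.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
b = np.array([3.0, 0.1])
grad = lambda x: A.T @ (A @ x - b)
x_hat = pa_prox_gradient(grad, np.zeros(2), [prox_l1, prox_l2sq], [0.5, 0.5])
```

Each iteration only requires the cheap proximal maps of the individual regularizers, which is what gives the method its low per-iteration complexity; the second coordinate, weakly supported by the data, is driven toward zero by the L1 component of the average.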

Published

2014-06-21

How to Cite

Zhong, W., & Kwok, J. (2014). Gradient Descent with Proximal Average for Nonconvex and Composite Regularization. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1). https://doi.org/10.1609/aaai.v28i1.8994

Section

Main Track: Novel Machine Learning Algorithms