Sparse Learning for Stochastic Composite Optimization

Authors

  • Weizhong Zhang, Zhejiang University
  • Lijun Zhang, Michigan State University
  • Yao Hu, Zhejiang University
  • Rong Jin, Michigan State University
  • Deng Cai, Zhejiang University
  • Xiaofei He, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v28i1.8844

Keywords:

Stochastic Optimization, Online Learning, Composite Gradient Mapping, Stochastic Gradient Descent

Abstract

In this paper, we focus on Stochastic Composite Optimization (SCO) for sparse learning, which aims to learn a sparse solution. Although many SCO algorithms have been developed for sparse learning with the optimal convergence rate $O(1/T)$, they often fail to deliver sparse solutions in the end, either because sparsity regularization has only a limited effect during stochastic optimization or because of limitations in the online-to-batch conversion. To improve the sparsity of solutions obtained by SCO, we propose a simple but effective stochastic optimization scheme that adds a novel sparse online-to-batch conversion to traditional SCO algorithms. Our theoretical analysis shows that the scheme finds solutions with better sparsity patterns without affecting the convergence rate. Experimental results on both synthetic and real-world data sets show that the proposed methods are more effective at recovering sparse solutions and have a convergence rate comparable to that of state-of-the-art SCO algorithms for sparse learning.
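To make the motivation concrete, the sketch below illustrates why a sparse online-to-batch conversion can matter: a proximal stochastic gradient step (composite gradient mapping with an $\ell_1$ regularizer) keeps each iterate sparse, but averaging the iterates for the online-to-batch conversion typically produces a dense solution. This is only a minimal illustration under assumed choices (an $\ell_1$-regularized least-squares objective, a $1/t$ step size, and a final soft-thresholding of the averaged iterate with an assumed threshold $\lambda/\sqrt{T}$); it is not the authors' exact procedure.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def sco_sparse_sketch(X, y, lam=0.1, T=1000, eta0=1.0, seed=0):
    """Proximal SGD for min_w E[(x^T w - y)^2]/2 + lam * ||w||_1,
    followed by an illustrative sparse online-to-batch conversion."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    w_sum = np.zeros(d)
    for t in range(1, T + 1):
        i = rng.integers(n)                          # draw one stochastic sample
        g = (X[i] @ w - y[i]) * X[i]                 # gradient of the smooth loss part
        eta = eta0 / t                               # step size (strongly convex setting)
        w = soft_threshold(w - eta * g, eta * lam)   # composite gradient mapping step
        w_sum += w
    w_avg = w_sum / T                                # standard online-to-batch averaging (dense)
    # Illustrative sparse conversion: re-apply the prox to the averaged iterate so
    # that small averaged coordinates are truncated to exact zeros (assumed threshold).
    w_sparse = soft_threshold(w_avg, lam / np.sqrt(T))
    return w_avg, w_sparse
```

Running this on synthetic data with a sparse ground-truth vector shows the gap the abstract refers to: `w_avg` usually has no exact zeros, while `w_sparse` recovers a sparse pattern without degrading the objective noticeably.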

Published

2014-06-21

How to Cite

Zhang, W., Zhang, L., Hu, Y., Jin, R., Cai, D., & He, X. (2014). Sparse Learning for Stochastic Composite Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1). https://doi.org/10.1609/aaai.v28i1.8844

Issue

Vol. 28 No. 1 (2014)

Section

AAAI Technical Track: Heuristic Search and Optimization