A General Efficient Hyperparameter-Free Algorithm for Convolutional Sparse Learning

Authors

  • Zheng Xu The University of Texas at Arlington
  • Junzhou Huang The University of Texas at Arlington

DOI:

https://doi.org/10.1609/aaai.v31i1.10880

Keywords:

sparse learning, convolutional sparse learning, primal-dual algorithm, hyperparameter-free algorithm, structured sparse learning

Abstract

Structured sparse learning has become a popular and mature research field. Among structured sparse models, we observe an interesting fact: most structured sparsity properties can be captured by convolution operators, the most famous examples being total variation and wavelet sparsity. This finding naturally leads us to a generalization termed Convolutional Sparsity. Because this generalization bridges convolution theory and sparse learning theory, we are able to propose a general, efficient, hyperparameter-free optimization framework for convolutional sparse models, building on the analysis theory of convolution operators. The convergence of the general, hyperparameter-free algorithm is comprehensively analyzed, with a non-ergodic rate of O(1/ϵ²) and an ergodic rate of O(1/ϵ), where ϵ is the desired accuracy. Extensive experiments confirm the superior performance of our general algorithm on various convolutional sparse models, even compared with application-specific algorithms.
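The abstract's central observation — that structured sparsity operators such as total variation are convolutions — can be illustrated with a minimal sketch. The snippet below (an assumption-laden illustration, not code from the paper) shows that the anisotropic 1-D total variation of a signal equals the ℓ1 norm of its convolution with the finite-difference kernel [1, -1]:

```python
import numpy as np

def tv_via_convolution(x):
    """Anisotropic 1-D total variation ||D x||_1, where the difference
    operator D is realized as convolution with the kernel [1, -1].

    Illustrative sketch only: the paper's general framework covers
    arbitrary convolutional sparsity operators, not just this one.
    """
    # np.convolve flips the kernel, so 'valid' mode yields x[i] - x[i-1];
    # the sign flip is irrelevant once we take absolute values.
    diffs = np.convolve(x, [1, -1], mode="valid")
    return np.abs(diffs).sum()

# A piecewise-constant signal has small TV: only two jumps contribute.
x = np.array([0.0, 0.0, 3.0, 3.0, 3.0, 1.0, 1.0])
print(tv_via_convolution(x))  # → 5.0  (|3 - 0| + |1 - 3|)
```

Wavelet sparsity fits the same pattern: each level of a discrete wavelet transform is a convolution with the analysis filters followed by downsampling, which is why a single convolution-based framework can handle both models.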

Published

2017-02-13

How to Cite

Xu, Z., & Huang, J. (2017). A General Efficient Hyperparameter-Free Algorithm for Convolutional Sparse Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10880