A Provable Approach for Double-Sparse Coding

Authors

  • Thanh Nguyen, Iowa State University
  • Raymond Wong, Texas A&M University
  • Chinmay Hegde, Iowa State University

DOI

https://doi.org/10.1609/aaai.v32i1.11654

Keywords

Sparse Coding, Dictionary Learning, Double-Sparse Coding, Feature Construction

Abstract

Sparse coding is a crucial subroutine in algorithms for various signal processing, deep learning, and other machine learning applications. The central goal is to learn an overcomplete dictionary that can sparsely represent a given dataset. However, the storage, transmission, and processing costs of the learned dictionary can become untenable when the data dimension is large. In this paper, we consider the double-sparsity model introduced by Rubinstein, Zibulevsky, and Elad (2010), in which the dictionary itself is the product of a fixed, known basis and a data-adaptive sparse component. First, we introduce a simple algorithm for double-sparse coding that is amenable to efficient implementation via neural architectures. Second, we theoretically analyze its performance and demonstrate asymptotic sample-complexity and running-time benefits over existing (provable) approaches for sparse coding. To our knowledge, our work introduces the first computationally efficient algorithm for double-sparse coding that enjoys rigorous statistical guarantees. Finally, we support our analysis with several numerical experiments on simulated data, confirming that our method can indeed be useful at problem sizes encountered in practical applications.
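
To make the model concrete, below is a minimal NumPy sketch of the double-sparsity setup described in the abstract: the effective dictionary D = Phi A is the product of a fixed, known basis Phi (a DCT here, purely for illustration) and a column-sparse component A. All dimensions and sparsity levels are illustrative assumptions, not values taken from the paper.

    # Sketch of the double-sparsity model of Rubinstein, Zibulevsky, and
    # Elad (2010): D = Phi @ A, with Phi a fixed basis and A column-sparse.
    # Sizes and sparsity levels here are assumed for illustration only.
    import numpy as np
    from scipy.fft import dct

    rng = np.random.default_rng(0)
    n, m = 64, 128   # signal dimension, number of atoms (overcomplete: m > n)
    r, k = 4, 6      # nonzeros per column of A, nonzeros per sparse code x

    Phi = dct(np.eye(n), axis=0, norm="ortho")  # fixed orthonormal DCT basis

    # Column-sparse component A: each atom mixes only r basis elements.
    A = np.zeros((n, m))
    for j in range(m):
        rows = rng.choice(n, size=r, replace=False)
        A[rows, j] = rng.standard_normal(r)
    A /= np.linalg.norm(Phi @ A, axis=0)  # unit-norm atoms of D = Phi @ A

    D = Phi @ A  # effective overcomplete dictionary

    # One synthetic sample: y = D x with a k-sparse code x.
    x = np.zeros(m)
    support = rng.choice(m, size=k, replace=False)
    x[support] = rng.standard_normal(k)
    y = D @ x

    # Storing A (r nonzeros per column) rather than the dense D is the
    # source of the storage/transmission savings noted in the abstract.
    print("nonzeros in A:", np.count_nonzero(A), "| dense D entries:", D.size)

Storing and applying the sparse component A instead of a dense dictionary is what shrinks the storage and running-time footprint when the data dimension n is large.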

Published

2018-04-29

How to Cite

Nguyen, T., Wong, R., & Hegde, C. (2018). A Provable Approach for Double-Sparse Coding. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11654