Learned Extragradient ISTA with Interpretable Residual Structures for Sparse Coding
DOI:
https://doi.org/10.1609/aaai.v35i10.17032
Keywords:
Representation Learning, Optimization
Abstract
Recently, the study of the learned iterative shrinkage thresholding algorithm (LISTA) has attracted increasing attention. A large number of experiments, as well as some theoretical results, have demonstrated the high efficiency of LISTA for solving sparse coding problems. However, existing LISTA methods all use serially connected structures. To address this issue, we propose a novel extragradient-based LISTA (ELISTA), which has a residual structure and theoretical guarantees. Moreover, most LISTA methods use the soft thresholding function, which has been found to cause a large estimation bias. Therefore, we propose a new thresholding function for ELISTA to replace soft thresholding. From a theoretical perspective, we prove that our method attains linear convergence. Ablation experiments verify the improvements of our method in both the network structure and the thresholding function in practice. Extensive empirical results confirm the advantages of our method.
Published
2021-05-18
How to Cite
Li, Y., Kong, L., Shang, F., Liu, Y., Liu, H., & Lin, Z. (2021). Learned Extragradient ISTA with Interpretable Residual Structures for Sparse Coding. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 8501-8509. https://doi.org/10.1609/aaai.v35i10.17032
Section
AAAI Technical Track on Machine Learning III