Learned Extragradient ISTA with Interpretable Residual Structures for Sparse Coding
Keywords: Representation Learning, Optimization
Abstract

Recently, the learned iterative shrinkage thresholding algorithm (LISTA) has attracted increasing attention. A large body of experiments, as well as some theory, has demonstrated the high efficiency of LISTA for solving sparse coding problems. However, existing LISTA methods all use serially connected network structures. To address this issue, we propose a novel extragradient-based LISTA (ELISTA), which has a residual structure and theoretical guarantees. Moreover, most LISTA methods use the soft thresholding function, which has been found to cause a large estimation bias. Therefore, we propose a new thresholding function for ELISTA in place of soft thresholding. From a theoretical perspective, we prove that our method attains linear convergence. Ablation experiments verify in practice the improvements our method makes to the network structure and the thresholding function, and extensive empirical results confirm its advantages.
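For context, the baseline that LISTA unrolls is classical ISTA, which alternates a gradient step with soft thresholding. Below is a minimal sketch of that baseline (not the paper's ELISTA, which replaces the gradient step with an extragradient step and uses a different thresholding function); the function names and parameters are illustrative assumptions, not from the paper.

```python
import numpy as np

def soft_threshold(x, theta):
    # Soft thresholding: shrinks every entry toward zero by theta,
    # which introduces the estimation bias the abstract refers to.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(D, y, lam=0.1, n_iter=100):
    """Classical ISTA for min_x 0.5 * ||y - D x||^2 + lam * ||x||_1.

    A hypothetical minimal sketch of the standard baseline; names and
    defaults here are illustrative, not taken from the paper.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth term's gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)           # gradient of 0.5 * ||y - D x||^2
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

Unrolling a fixed number of these iterations and learning the matrices and thresholds end-to-end yields LISTA; ELISTA modifies both the update structure (residual, extragradient-based) and the thresholding function.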
How to Cite
Li, Y., Kong, L., Shang, F., Liu, Y., Liu, H., & Lin, Z. (2021). Learned Extragradient ISTA with Interpretable Residual Structures for Sparse Coding. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 8501-8509. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17032
AAAI Technical Track on Machine Learning III