Iterative Regularization with k-support Norm: An Important Complement to Sparse Recovery

Authors

  • William de Vazelhes, MBZUAI, Abu Dhabi, UAE
  • Bhaskar Mukhoty, MBZUAI, Abu Dhabi, UAE
  • Xiao-Tong Yuan, Nanjing University, Suzhou, China
  • Bin Gu, MBZUAI, Abu Dhabi, UAE; Jilin University, Changchun, China

DOI:

https://doi.org/10.1609/aaai.v38i10.29057

Keywords:

ML: Optimization, ML: Classification and Regression

Abstract

Sparse recovery is ubiquitous in machine learning and signal processing. Due to the NP-hard nature of sparse recovery, existing methods are known to suffer either from restrictive (or even unknown) applicability conditions, or from high computational cost. Recently, iterative regularization methods have emerged as a promising fast approach because they can achieve sparse recovery in one pass through early stopping, rather than the tedious grid search used in traditional methods. However, most of these iterative methods are based on the l1 norm, which requires restrictive applicability conditions and can fail in many cases. Therefore, achieving sparse recovery with iterative regularization methods under a wider range of conditions remains to be explored. To address this issue, we propose a novel iterative regularization algorithm, IRKSN, based on the k-support norm regularizer rather than the l1 norm. We provide conditions for sparse recovery with IRKSN, and compare them with traditional conditions for recovery with l1 norm regularizers. Additionally, we give an early stopping bound on the model error of IRKSN with explicit constants, achieving the standard linear rate for sparse recovery. Finally, we illustrate the applicability of our algorithm in several experiments, including a support recovery experiment with a correlated design matrix.
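The abstract's central ingredient is the k-support norm. The paper's IRKSN algorithm itself is not reproduced here, but as background, the norm's value can be computed in closed form from the sorted magnitudes of the vector (the formula of Argyriou, Foygel, and Srebro, 2012); it interpolates between the l1 norm (k = 1) and the l2 norm (k = d). A minimal sketch:

```python
import numpy as np

def k_support_norm(w, k):
    """Value of the k-support norm of w.

    Uses the closed-form expression over sorted magnitudes
    |w|_(1) >= ... >= |w|_(d): find the unique r in {0, ..., k-1} with
        |w|_(k-r-1) > (1/(r+1)) * sum_{i=k-r}^{d} |w|_(i) >= |w|_(k-r)
    (convention |w|_(0) = +inf), then
        ||w||^sp_k = sqrt( sum_{i=1}^{k-r-1} |w|_(i)^2
                           + (1/(r+1)) * ( sum_{i=k-r}^{d} |w|_(i) )^2 ).
    """
    a = np.sort(np.abs(np.asarray(w, dtype=float)))[::-1]  # descending magnitudes
    d = a.size
    if not 1 <= k <= d:
        raise ValueError("k must be in {1, ..., d}")
    for r in range(k):
        # tail = sum_{i=k-r}^{d} |w|_(i), i.e. a[k-r-1:] in 0-based indexing
        tail = a[k - r - 1:].sum()
        # left condition uses |w|_(k-r-1) = a[k-r-2]; vacuous when k-r-1 == 0
        head_ok = (k - r - 1 == 0) or (a[k - r - 2] > tail / (r + 1))
        tail_ok = tail / (r + 1) >= a[k - r - 1]
        if head_ok and tail_ok:
            head = np.sum(a[:k - r - 1] ** 2)
            return float(np.sqrt(head + tail ** 2 / (r + 1)))
    raise RuntimeError("no valid r found")  # unreachable for valid input

w = [3.0, -1.0, 2.0, 0.0]
print(k_support_norm(w, 1))  # 6.0, the l1 norm
print(k_support_norm(w, 4))  # sqrt(14), the l2 norm
```

For intermediate k, the value lies between the l2 and l1 norms of the vector, which is what makes the k-support norm a tighter convex relaxation for k-sparse vectors than the l1 ball.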

Published

2024-03-24

How to Cite

de Vazelhes, W., Mukhoty, B., Yuan, X.-T., & Gu, B. (2024). Iterative Regularization with k-support Norm: An Important Complement to Sparse Recovery. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 11731-11739. https://doi.org/10.1609/aaai.v38i10.29057

Section

AAAI Technical Track on Machine Learning I