A Feasible Nonconvex Relaxation Approach to Feature Selection

Authors

  • Cuixia Gao Zhejiang University
  • Naiyan Wang Zhejiang University
  • Qi Yu Zhejiang University
  • Zhihua Zhang Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v25i1.7921

Abstract

Variable selection problems are typically addressed under a penalized optimization framework. Nonconvex penalties such as the minimax concave penalty (MCP) and the smoothly clipped absolute deviation (SCAD) have been shown, both practically and theoretically, to yield sparse solutions. In this paper we propose a new nonconvex penalty that we call the exponential-type penalty. The exponential-type penalty is characterized by a positive parameter, which establishes a connection with the ℓ0 and ℓ1 penalties. We apply this new penalty to sparse supervised learning problems. To solve the resulting optimization problem, we resort to a reweighted ℓ1 minimization method. Moreover, we devise an efficient method for the adaptive update of the tuning parameter. Our experimental results are encouraging: they show that the exponential-type penalty is competitive with MCP and SCAD.
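The reweighted ℓ1 scheme the abstract mentions can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it assumes an exponential-type penalty of the form p(t) = λ(1 − e^(−t/γ)) for t ≥ 0 (so p approaches a scaled ℓ1 penalty as γ grows and an ℓ0-like penalty as γ shrinks), and it solves each weighted ℓ1 subproblem with plain ISTA. The function names and the fixed tuning parameters are illustrative choices, not from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding operator, the prox of the weighted l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def weighted_lasso_ista(X, y, weights, n_iter=500):
    """Solve min_b 0.5||y - Xb||^2 + sum_j weights[j] * |b_j| by ISTA."""
    L = np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the smooth part
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)
        b = soft_threshold(b - grad / L, weights / L)
    return b

def reweighted_l1_exp_penalty(X, y, lam=0.1, gamma=0.5, n_outer=10):
    """Reweighted l1 minimization for the (assumed) exponential-type penalty
    p(t) = lam * (1 - exp(-t / gamma)).

    Each outer step linearizes the concave penalty at the current iterate,
    giving per-coordinate weights p'(|b_j|) = (lam / gamma) * exp(-|b_j| / gamma):
    large coefficients are penalized lightly, small ones heavily.
    """
    b = np.zeros(X.shape[1])
    for _ in range(n_outer):
        w = (lam / gamma) * np.exp(-np.abs(b) / gamma)
        b = weighted_lasso_ista(X, y, w)
    return b
```

On a synthetic sparse regression problem this recovers the support while shrinking the nonzero coefficients much less than a plain lasso would, since the exponential weights decay quickly on large entries.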

Published

2011-08-04

How to Cite

Gao, C., Wang, N., Yu, Q., & Zhang, Z. (2011). A Feasible Nonconvex Relaxation Approach to Feature Selection. Proceedings of the AAAI Conference on Artificial Intelligence, 25(1), 356-361. https://doi.org/10.1609/aaai.v25i1.7921

Section

AAAI Technical Track: Machine Learning