To Avoid the Pitfall of Missing Labels in Feature Selection: A Generative Model Gives the Answer

Authors

  • Yuanyuan Xu Nankai University
  • Jun Wang Ludong University
  • Jinmao Wei Nankai University

DOI:

https://doi.org/10.1609/aaai.v34i04.6127

Abstract

In multi-label learning, each instance is described by a large number of features, many of which are noisy or irrelevant, and is associated with a set of class labels whose information is generally incomplete. Like a tossed coin, a missing label has two possible sides: one cannot tell in advance whether the information it would provide for feature selection is favorable (relevant) or unfavorable (irrelevant). Existing approaches either superficially treat missing labels as negative or indiscriminately impute them with predicted values, which may respectively overestimate unobserved labels or introduce new noise into the selection of discriminative features. To avoid the pitfall of missing labels, this paper proposes a novel unified framework that selects discriminative features and models the incomplete label matrix from a generative point of view. Concretely, we relax the Smoothness Assumption to infer label observability, which reveals the positions of unobserved labels, and employ a spike-and-slab prior to perform feature selection while excluding unobserved labels. A data-augmentation strategy yields full local conjugacy in the model, enabling a simple and efficient Expectation-Maximization (EM) algorithm for inference. Quantitative and qualitative experimental results demonstrate the superiority of the proposed approach under various evaluation metrics.
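The spike-and-slab prior mentioned in the abstract can be illustrated with a toy sketch. This is not the authors' model (which handles multi-label data, missing labels, and EM inference); it is a minimal linear-Gaussian example in which each feature's weight is either exactly zero (the "spike") or drawn from a Gaussian "slab", and exact posterior inclusion probabilities are obtained by enumerating all spike/slab configurations. All variable names and parameter values here are illustrative assumptions.

```python
from itertools import product

import numpy as np

# Synthetic data: 100 samples, 5 features, only features 0 and 1 relevant.
rng = np.random.default_rng(0)
n, d = 100, 5
X = rng.standard_normal((n, d))
w_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0])
sigma2, tau2 = 0.25, 1.0          # noise variance and slab variance (assumed known)
y = X @ w_true + rng.normal(scale=np.sqrt(sigma2), size=n)

def log_marginal(mask):
    """Log marginal likelihood of y when the features in `mask` use the slab.

    Integrating the slab weights out of y = X_S w_S + eps gives
    y ~ N(0, sigma2 * I + tau2 * X_S X_S^T).
    """
    cov = sigma2 * np.eye(n)
    if mask.any():
        Xs = X[:, mask]
        cov += tau2 * (Xs @ Xs.T)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(cov, y))

# Enumerate all 2^d spike/slab configurations (uniform prior over them)
# and read off per-feature posterior inclusion probabilities.
masks = [np.array(bits, dtype=bool) for bits in product([0, 1], repeat=d)]
logp = np.array([log_marginal(m) for m in masks])
post = np.exp(logp - logp.max())
post /= post.sum()
probs = np.array([sum(p for p, m in zip(post, masks) if m[j]) for j in range(d)])
print(np.round(probs, 3))   # features 0 and 1 get inclusion probabilities near 1
```

The marginal-likelihood comparison builds in an automatic Occam penalty: adding a feature to the slab enlarges the covariance determinant, so an irrelevant feature is included only if it explains enough of y to pay that cost. The paper's contribution is to combine such a prior with inferred label observability so that unobserved labels are excluded from this comparison.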

Published

2020-04-03

How to Cite

Xu, Y., Wang, J., & Wei, J. (2020). To Avoid the Pitfall of Missing Labels in Feature Selection: A Generative Model Gives the Answer. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6534-6541. https://doi.org/10.1609/aaai.v34i04.6127

Section

AAAI Technical Track: Machine Learning