Partial Multi-Label Learning with Label Distribution


  • Ning Xu Southeast University
  • Yun-Peng Liu Southeast University
  • Xin Geng Southeast University



Partial multi-label learning (PML) aims to learn from training examples, each associated with a set of candidate labels among which only a subset are valid. The common strategy for inducing a predictive model is to disambiguate the candidate label set, e.g., by identifying the ground-truth labels via the confidence of each candidate label or by estimating the noisy labels in the candidate label sets. Nonetheless, these strategies ignore the essential label distribution underlying each instance, since the label distribution is not explicitly available in the training set. In this paper, a new partial multi-label learning strategy named Pml-ld is proposed to learn from partial multi-label examples via label enhancement. Specifically, label distributions are recovered by leveraging the topological information of the feature space and the correlations among the labels. After that, a multi-class predictive model is learned by fitting a regularized multi-output regressor to the recovered label distributions. Experimental results on synthetic as well as real-world datasets clearly validate the effectiveness of Pml-ld for solving PML problems.
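The two-step pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' Pml-ld algorithm: the kNN-graph label propagation used here as the label-enhancement step, and the hyperparameters `k`, `alpha`, `lam`, are illustrative assumptions; the closed-form ridge solution stands in for the regularized multi-output regressor.

```python
import numpy as np

def knn_similarity(X, k=3):
    # Row-normalized kNN similarity graph over the feature space
    # (a stand-in for "topological information of the feature space").
    n = X.shape[0]
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    W = np.exp(-d)
    np.fill_diagonal(W, 0.0)
    for i in range(n):
        idx = np.argsort(d[i])
        keep = np.zeros(n, dtype=bool)
        keep[idx[1:k + 1]] = True  # k nearest neighbors, excluding self
        W[i, ~keep] = 0.0
    return W / W.sum(axis=1, keepdims=True)

def label_enhancement(X, Y, alpha=0.5, iters=50, k=3):
    # Recover label distributions from candidate labels Y (n x q, 0/1)
    # by propagating them over the feature-space graph, then normalizing.
    P = knn_similarity(X, k)
    D = Y.astype(float)
    for _ in range(iters):
        D = alpha * (P @ D) + (1 - alpha) * Y
    return D / D.sum(axis=1, keepdims=True)

def fit_ridge(X, D, lam=1e-2):
    # Regularized multi-output regressor: closed-form ridge regression
    # fit to the recovered label distributions D.
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # add bias column
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ D)

def predict(X, W):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return Xb @ W
```

At prediction time, labels whose predicted distribution mass exceeds a threshold (or the top-ranked labels) would form the output label set.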




How to Cite

Xu, N., Liu, Y.-P., & Geng, X. (2020). Partial Multi-Label Learning with Label Distribution. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6510-6517.



AAAI Technical Track: Machine Learning