Can Label-Specific Features Help Partial-Label Learning?
DOI:
https://doi.org/10.1609/aaai.v37i6.25904
Keywords:
ML: Semi-Supervised Learning, ML: Multi-Class/Multi-Label Learning & Extreme Classification, ML: Multi-Instance/Multi-View Learning
Abstract
Partial-label learning (PLL) aims to learn from inexact annotations in which each training example is associated with a coarse set of candidate labels. Owing to its practical relevance, many PLL algorithms have been proposed in recent literature. Most prior PLL works attempt to identify the ground-truth label within each candidate set and then train a classifier to fit example features to the identified labels. From a different perspective, rather than learning from examples with identical features across all classes, we propose to enrich the feature space and ask: can label-specific features help PLL? Despite their benefits, previous label-specific feature approaches rely on ground-truth labels to split the positive and negative examples of each class before conducting clustering analysis, so they are not directly applicable in PLL. To remedy this problem, we propose an uncertainty-aware confidence region that accommodates false-positive labels. We first employ graph-based label enhancement to yield smooth pseudo-labels and facilitate the confidence-region split. After acquiring the label-specific features, a family of binary classifiers is induced. Extensive experiments on both synthesized and real-world datasets show that our method consistently outperforms eight baselines. Our code is released at https://github.com/meteoseeker/UCL
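As a rough, self-contained sketch of what the graph-based label enhancement step described in the abstract could look like, the snippet below initializes confidences uniformly over each candidate set and smooths them over a kNN graph, restricting probability mass to candidate labels at every iteration. The function enhance_labels, the propagation rule, and all hyperparameter choices are illustrative assumptions, not the authors' actual algorithm (see the linked repository for that).

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph


def enhance_labels(X, candidate_mask, k=10, alpha=0.9, n_iters=20):
    """Illustrative graph-based label enhancement for PLL (an assumption,
    not the paper's exact method): propagate candidate-label confidences
    over a kNN graph to produce smooth pseudo-labels.

    X              -- (n, d) feature matrix
    candidate_mask -- (n, q) binary matrix; 1 where a label is a candidate
    Returns an (n, q) pseudo-label confidence matrix, rows summing to 1.
    """
    # Symmetric kNN affinity graph over the training examples.
    W = kneighbors_graph(X, n_neighbors=k, mode="connectivity")
    W = 0.5 * (W + W.T)

    # Row-normalized propagation matrix.
    deg = np.asarray(W.sum(axis=1)).ravel()
    P = W.multiply(1.0 / np.maximum(deg, 1e-12)[:, None]).tocsr()

    # Start from uniform confidence over each candidate set.
    mask = candidate_mask.astype(float)
    F = mask / mask.sum(axis=1, keepdims=True)
    F0 = F.copy()
    for _ in range(n_iters):
        F = alpha * (P @ F) + (1 - alpha) * F0   # smooth over the graph
        F = F * mask                             # keep mass on candidates only
        F = F / np.maximum(F.sum(axis=1, keepdims=True), 1e-12)
    return F


# Toy usage: 100 examples, 5 classes, false-positive candidates added at random.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = rng.integers(0, 5, size=100)
mask = (rng.random((100, 5)) < 0.25).astype(int)
mask[np.arange(100), y] = 1                      # true label is always a candidate
pseudo = enhance_labels(X, mask)
print(pseudo.shape, pseudo.sum(axis=1)[:3])      # (100, 5), rows sum to ~1.0
```

Re-masking to the candidate set after each propagation step keeps the pseudo-labels consistent with the PLL supervision while still letting graph smoothness disambiguate among candidates; smoother confidences of this kind would then support a split into confident and uncertain regions per class.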
Published
2023-06-26
How to Cite
Dong, R.-J., Hang, J.-Y., Wei, T., & Zhang, M.-L. (2023). Can Label-Specific Features Help Partial-Label Learning?. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 7432-7440. https://doi.org/10.1609/aaai.v37i6.25904
Section
AAAI Technical Track on Machine Learning I