Multi-Label Supervised Contrastive Learning
DOI:
https://doi.org/10.1609/aaai.v38i15.29619
Keywords:
ML: Multi-class/Multi-label Learning & Extreme Classification, ML: Representation Learning, ML: Deep Learning Algorithms
Abstract
Multi-label classification is a difficult problem because of the complexity of label correlations. Although it shares with contrastive learning the goal of exploiting correlations for representation learning, how to best leverage label information remains challenging. Previous efforts extract label-level representations or map labels to an embedding space, overlooking the correlations among multiple labels. Great ambiguity arises in determining positive samples when candidates share different extents of label overlap with the anchor, and in integrating such relations into the loss function. In this work, we propose Multi-Label Supervised Contrastive learning (MulSupCon), with a novel contrastive loss that weights each sample by how much label overlap it shares with the anchor. By analyzing gradients, we explain why our method performs better in multi-label settings. For evaluation, we conduct direct classification and transfer learning on several multi-label datasets, including widely used image datasets such as MS-COCO and NUS-WIDE. The results indicate that our method outperforms the traditional multi-label classification method and performs competitively against other existing approaches.
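The abstract describes the loss only at a high level. As a rough illustration of the core idea, weighting each in-batch sample by its label overlap with the anchor rather than treating positives as all-or-nothing, here is a minimal PyTorch sketch; the function name mulsupcon_loss and the Jaccard-overlap weighting are assumptions for illustration, not the authors' released implementation.

import torch
import torch.nn.functional as F

def mulsupcon_loss(features, labels, temperature=0.1):
    # features: (N, D) L2-normalized embeddings; labels: (N, C) multi-hot.
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)

    # Cosine similarities scaled by temperature; exclude self-pairs.
    logits = (features @ features.T) / temperature
    logits = logits.masked_fill(self_mask, -1e9)

    # Label-overlap weights, here the Jaccard index |y_i ∩ y_j| / |y_i ∪ y_j|
    # (the specific overlap measure is an assumption for illustration).
    labels = labels.float()
    inter = labels @ labels.T
    union = labels.sum(1, keepdim=True) + labels.sum(1) - inter
    weights = (inter / union.clamp(min=1)).masked_fill(self_mask, 0.0)

    # Weighted supervised-contrastive objective: each in-batch sample
    # contributes to the anchor's loss in proportion to its label overlap.
    log_prob = F.log_softmax(logits, dim=1)
    weight_sum = weights.sum(1)
    valid = weight_sum > 0  # keep anchors with at least one overlapping sample
    per_anchor = -(weights * log_prob).sum(1)[valid] / weight_sum[valid]
    return per_anchor.mean()

In use, features would come from a projection head over the encoder output and labels from multi-hot annotations such as MS-COCO's; whether MulSupCon uses Jaccard overlap or another weighting scheme is not stated on this page, so treat the weighting here as a placeholder.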
Published
2024-03-24
How to Cite
Zhang, P., & Wu, M. (2024). Multi-Label Supervised Contrastive Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(15), 16786-16793. https://doi.org/10.1609/aaai.v38i15.29619
Section
AAAI Technical Track on Machine Learning VI