CFDM: Contrastive Fusion and Disambiguation for Multi-View Partial-Label Learning

Authors

  • Qiuru Hai, College of Computer Science, Beijing University of Technology
  • Yongjian Deng, College of Computer Science, Beijing University of Technology
  • Yuena Lin, College of Computer Science, Beijing University of Technology; Idealism Beijing Technology Co., Ltd.
  • Zheng Li, College of Computer Science, Beijing University of Technology
  • Zhen Yang, College of Computer Science, Beijing University of Technology
  • Gengyu Lyu, College of Computer Science, Beijing University of Technology

DOI:

https://doi.org/10.1609/aaai.v39i16.33869

Abstract

When dealing with multi-view data, the heterogeneity of data attributes across different views often leads to label ambiguity. To address this challenge effectively, this paper designs a Multi-View Partial-Label Learning (MVPLL) framework, where each training instance is described by multiple view features and associated with a set of candidate labels, among which only one is correct. The key to dealing with such a problem lies in how to effectively fuse multi-view information and accurately disambiguate these ambiguous labels. In this paper, we propose a novel approach named CFDM, which explores the consistency and complementarity of multi-view data through multi-view contrastive fusion and reduces label ambiguity through multi-class contrastive prototype disambiguation. Specifically, we first extract view-specific representations using multiple view-specific autoencoders, and then integrate multi-view information through both inter-view and intra-view contrastive fusion to enhance the distinctiveness of these representations. Afterwards, we utilize these distinctive representations to establish and update prototype vectors for each class within each view. Based on these prototypes, we apply contrastive prototype disambiguation to learn global class prototypes and accordingly reduce label ambiguity. In our model, multi-view contrastive fusion and multi-class contrastive prototype disambiguation are conducted jointly to enhance each other within a coherent framework, leading to improved classification performance. Experimental results on multiple datasets demonstrate that our proposed method is superior to other state-of-the-art methods.
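As an illustrative aside (not the authors' implementation, whose details are in the paper), the general idea behind prototype-based partial-label disambiguation — scoring an instance's fused representation against one prototype per class and renormalizing those scores over the candidate label set only — can be sketched as follows. All function and variable names here are hypothetical:

```python
import numpy as np

def disambiguate(z, prototypes, candidate_mask, tau=0.1):
    """Soft-label update for one instance: cosine similarity to each
    class prototype, temperature-softmax-normalized over the candidate
    label set only (non-candidate classes get zero weight).

    z:              (d,)  fused instance representation
    prototypes:     (C, d) one prototype vector per class
    candidate_mask: (C,)  1 for candidate labels, 0 otherwise
    """
    # cosine similarity between the instance and each class prototype
    z_n = z / np.linalg.norm(z)
    p_n = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = p_n @ z_n                      # (C,)
    # restrict to the candidate set, then apply a temperature softmax
    logits = sim / tau
    logits[candidate_mask == 0] = -np.inf
    w = np.exp(logits - logits[candidate_mask == 1].max())
    return w / w.sum()

# toy example: 3 classes, candidate set {0, 1},
# instance representation closest to prototype 1
protos = np.eye(3)
z = np.array([0.2, 0.9, 0.1])
soft = disambiguate(z, protos, np.array([1, 1, 0]))
```

In this sketch the resulting soft labels concentrate on the candidate class whose prototype is nearest, which mirrors the abstract's claim that sharper (more distinctive) representations directly drive disambiguation; in the full method, those soft labels would in turn feed back into the contrastive fusion objective.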

Published

2025-04-11

How to Cite

Hai, Q., Deng, Y., Lin, Y., Li, Z., Yang, Z., & Lyu, G. (2025). CFDM: Contrastive Fusion and Disambiguation for Multi-View Partial-Label Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(16), 17005-17013. https://doi.org/10.1609/aaai.v39i16.33869

Section

AAAI Technical Track on Machine Learning II