Incomplete Multi-View Multi-Label Learning via Label-Guided Masked View- and Category-Aware Transformers

Authors

  • Chengliang Liu, Harbin Institute of Technology, Shenzhen
  • Jie Wen, Harbin Institute of Technology, Shenzhen
  • Xiaoling Luo, Harbin Institute of Technology, Shenzhen
  • Yong Xu, Harbin Institute of Technology, Shenzhen; Pengcheng Laboratory

DOI:

https://doi.org/10.1609/aaai.v37i7.26060

Keywords:

ML: Multi-Instance/Multi-View Learning, ML: Multi-Class/Multi-Label Learning & Extreme Classification, ML: Multimodal Learning, ML: Representation Learning

Abstract

Multi-view data is more expressive than single-view data, and multi-label annotation provides richer supervision than single-label annotation, which makes multi-view multi-label learning widely applicable to various pattern recognition tasks. This complex representation learning problem poses three main challenges: i) How can consistent representations of samples be learned across all views? ii) How can category correlations among labels be exploited to guide inference? iii) How can the negative impact of incomplete views or labels be avoided? To address these problems, we propose a general multi-view multi-label learning framework named label-guided masked view- and category-aware transformers. First, we design two transformer-style modules for cross-view feature aggregation and multi-label classification, respectively: the former aggregates information from different views while extracting view-specific features, and the latter learns subcategory embeddings to improve classification performance. Second, considering the imbalance in expressive power among views, we propose an adaptively weighted view fusion module to obtain view-consistent embedding features. Third, we impose a label manifold constraint on sample-level representation learning to maximize the utilization of supervision information. Last but not least, all modules are designed under the premise of incomplete views and labels, which makes our method applicable to arbitrary incomplete multi-view multi-label data. Extensive experiments on five datasets confirm that our method has clear advantages over other state-of-the-art methods.
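To make the adaptively weighted view fusion under missing views concrete, the sketch below fuses view-specific embeddings by zeroing out unavailable views with a per-sample mask and renormalizing the learned view weights, so absent views contribute nothing to the fused representation. All names and array shapes here (`masked_view_fusion`, the `(n_views, n_samples, dim)` layout) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def masked_view_fusion(view_embeddings, view_mask, view_weights):
    """Fuse view-specific embeddings into a view-consistent representation.

    view_embeddings: (n_views, n_samples, dim) view-specific features
    view_mask:       (n_samples, n_views) 0/1 view-availability indicator
    view_weights:    (n_views,) learned non-negative importance weights
    """
    # Per-sample effective weights: zero out missing views, then
    # renormalize so the available views' weights sum to one.
    w = view_mask * view_weights                                # (n_samples, n_views)
    w = w / np.clip(w.sum(axis=1, keepdims=True), 1e-12, None)
    # Weighted sum of each sample's available view embeddings.
    return np.einsum('sv,vsd->sd', w, view_embeddings)          # (n_samples, dim)
```

For a sample missing all but one view, the renormalization reduces the fusion to that single view's embedding, which is why the method degrades gracefully as views drop out.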

Published

2023-06-26

How to Cite

Liu, C., Wen, J., Luo, X., & Xu, Y. (2023). Incomplete Multi-View Multi-Label Learning via Label-Guided Masked View- and Category-Aware Transformers. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8816-8824. https://doi.org/10.1609/aaai.v37i7.26060

Section

AAAI Technical Track on Machine Learning II