Progressive Deep Multi-View Comprehensive Representation Learning

Authors

  • Cai Xu Xidian University
  • Wei Zhao Xidian University
  • Jinglong Zhao Xidian University
  • Ziyu Guan Xidian University
  • Yaming Yang Xidian University
  • Long Chen Xi’an University of Posts & Telecommunications
  • Xiangyu Song Deakin University

DOI:

https://doi.org/10.1609/aaai.v37i9.26254

Keywords:

ML: Multi-Instance/Multi-View Learning, ML: Classification and Regression, ML: Multimodal Learning, ML: Representation Learning

Abstract

Multi-view Comprehensive Representation Learning (MCRL) aims to synthesize information from multiple views to learn comprehensive representations of data items. Prevalent deep MCRL methods typically concatenate synergistic view-specific representations or average aligned view-specific representations in the fusion stage. However, synergistic fusion methods inevitably degenerate or even fail when partial views are missing in real-world applications, and alignment-based fusion methods usually cannot fully exploit the complementarity of multi-view data. To eliminate these drawbacks, in this work we present a Progressive Deep Multi-view Fusion (PDMF) method. Considering that the multi-view comprehensive representation should contain complete information while the view-specific data contain only partial information, we argue that it is unstable to directly learn the mapping from partial information to complete information. Hence, PDMF employs a progressive learning strategy consisting of a pre-training stage and a fine-tuning stage. In the pre-training stage, PDMF decodes an auxiliary comprehensive representation into the view-specific data. It also captures the consistency and complementarity among views by learning the relations between the dimensions of the auxiliary comprehensive representation and all views. In the fine-tuning stage, PDMF learns the mapping from the original data to the comprehensive representation with the help of the auxiliary comprehensive representation and the learned relations. Experiments on a synthetic toy dataset and four real-world datasets show that PDMF outperforms state-of-the-art baseline methods. The code is released at https://github.com/winterant/PDMF.
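To make the two-stage progressive strategy described above concrete, the following is a minimal PyTorch sketch, not the authors' implementation (see the linked repository for that). It assumes two toy views, a freely optimized auxiliary comprehensive representation H learned in a pre-training stage via view-specific decoders, and an encoder fine-tuned to map the concatenated original views to the comprehensive representation under the guidance of H and the frozen decoders; all names (AuxDecoder-style modules, loss weights, dimensions) are illustrative choices.

```python
# Illustrative sketch of a two-stage progressive multi-view fusion scheme
# on randomly generated two-view data. Not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_items, view_dims, d_comp = 256, [20, 30], 16        # two views, comprehensive dim
views = [torch.randn(n_items, d) for d in view_dims]  # toy view-specific data

# Stage 1 (pre-training): optimize a free auxiliary comprehensive representation H
# by decoding it back to every view; the decoder weights relate the dimensions of H
# to each view.
H = nn.Parameter(torch.randn(n_items, d_comp))
decoders = nn.ModuleList([nn.Linear(d_comp, d) for d in view_dims])
opt1 = torch.optim.Adam([H, *decoders.parameters()], lr=1e-2)
for _ in range(200):
    opt1.zero_grad()
    loss = sum(F.mse_loss(dec(H), x) for dec, x in zip(decoders, views))
    loss.backward()
    opt1.step()

# Stage 2 (fine-tuning): learn an encoder from the original views to the
# comprehensive representation, guided by the auxiliary representation H
# and by reconstruction through the now-frozen decoders.
for p in decoders.parameters():
    p.requires_grad_(False)
encoder = nn.Sequential(nn.Linear(sum(view_dims), 64), nn.ReLU(), nn.Linear(64, d_comp))
opt2 = torch.optim.Adam(encoder.parameters(), lr=1e-3)
H_target = H.detach()
for _ in range(200):
    opt2.zero_grad()
    z = encoder(torch.cat(views, dim=1))
    guide = F.mse_loss(z, H_target)                                   # match auxiliary rep.
    recon = sum(F.mse_loss(dec(z), x) for dec, x in zip(decoders, views))
    (guide + recon).backward()
    opt2.step()

print("comprehensive representations:", encoder(torch.cat(views, dim=1)).shape)
```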

Published

2023-06-26

How to Cite

Xu, C., Zhao, W., Zhao, J., Guan, Z., Yang, Y., Chen, L., & Song, X. (2023). Progressive Deep Multi-View Comprehensive Representation Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 10557-10565. https://doi.org/10.1609/aaai.v37i9.26254

Section

AAAI Technical Track on Machine Learning IV