Multi-View Information-Bottleneck Representation Learning

Authors

  • Zhibin Wan, Tianjin University
  • Changqing Zhang, Tianjin University, Tianjin Key Lab of Machine Learning, Tianjin, China
  • Pengfei Zhu, Tianjin University, Tianjin Key Lab of Machine Learning, Tianjin, China
  • Qinghua Hu, Tianjin University, Tianjin Key Lab of Machine Learning, Tianjin, China

DOI:

https://doi.org/10.1609/aaai.v35i11.17210

Keywords:

Multi-instance/Multi-view Learning

Abstract

In real-world applications, clustering and classification can usually be improved by fusing information from different views. Unsupervised representation learning on multi-view data has therefore become a compelling topic in machine learning. In this paper, we propose a novel and flexible unsupervised multi-view representation learning model termed Collaborative Multi-View Information Bottleneck Networks (CMIB-Nets), which comprehensively explores the common latent structure and the view-specific intrinsic information, and discards superfluous information in the data, significantly improving the generalization capability of the model. Specifically, our proposed model relies on the information bottleneck principle to integrate the shared representation among different views with the view-specific representation of each view, promoting a complete multi-view representation and flexibly balancing the complementarity and consistency among multiple views. We conduct extensive experiments (including clustering analysis, robustness experiments, and an ablation study) on real-world datasets, which empirically demonstrate promising generalization ability and robustness compared to state-of-the-art methods.
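To make the abstract's description concrete, the sketch below illustrates the general shape of a variational information-bottleneck objective over shared and view-specific Gaussian latents. It is a minimal, hypothetical illustration of the IB principle as applied to multiple views, not the authors' CMIB-Nets implementation: the function names, the diagonal-Gaussian parameterization, and the single `beta` trade-off weight are all assumptions made for this example.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ),
    summed over latent dims, averaged over the batch."""
    return 0.5 * np.mean(np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1))

def multi_view_ib_loss(shared, specifics, recon_errs, beta=1e-3):
    """Hypothetical multi-view IB objective (illustrative only):
    per-view reconstruction fidelity (keep predictive information)
    plus beta-weighted KL compression terms on the shared latent
    and on each view-specific latent (discard superfluous information).

    shared     : (mu, logvar) arrays for the shared latent, shape (batch, dim)
    specifics  : list of (mu, logvar) pairs, one per view
    recon_errs : list of scalar per-view reconstruction errors
    beta       : compression/fidelity trade-off weight
    """
    kl = gaussian_kl(*shared) + sum(gaussian_kl(m, lv) for m, lv in specifics)
    return sum(recon_errs) + beta * kl
```

With `beta = 0` the objective reduces to plain multi-view reconstruction; increasing `beta` compresses both the shared and view-specific codes toward the prior, which is the mechanism the abstract credits for discarding superfluous information and improving generalization.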

Published

2021-05-18

How to Cite

Wan, Z., Zhang, C., Zhu, P., & Hu, Q. (2021). Multi-View Information-Bottleneck Representation Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 10085-10092. https://doi.org/10.1609/aaai.v35i11.17210

Section

AAAI Technical Track on Machine Learning IV