Multi-View Information-Bottleneck Representation Learning
Abstract
In real-world applications, clustering or classification can usually be improved by fusing information from different views. Unsupervised representation learning on multi-view data has therefore become a compelling topic in machine learning. In this paper, we propose a novel and flexible unsupervised multi-view representation learning model, termed Collaborative Multi-View Information Bottleneck Networks (CMIB-Nets), which comprehensively explores the common latent structure and the view-specific intrinsic information, and discards superfluous information in the data, significantly improving the generalization capability of the model. Specifically, our proposed model relies on the information bottleneck principle to integrate the shared representation among different views with the view-specific representation of each view, promoting a complete multi-view representation and flexibly balancing the complementarity and consistency among multiple views. We conduct extensive experiments (including clustering analysis, robustness experiments, and an ablation study) on real-world datasets, which empirically show promising generalization ability and robustness compared to state-of-the-art methods.
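For context, the classical information bottleneck principle (Tishby et al.) that the abstract refers to seeks a compressed representation Z of the input X that remains predictive of a relevance variable Y; a standard statement of the objective, with trade-off parameter β, is:

```latex
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)
```

Here I(·;·) denotes mutual information: minimizing I(X;Z) discards superfluous information from the input, while maximizing I(Z;Y) preserves the task-relevant content. The exact multi-view objective of CMIB-Nets, combining shared and view-specific representations, is given in the full paper.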
How to Cite
Wan, Z., Zhang, C., Zhu, P., & Hu, Q. (2021). Multi-View Information-Bottleneck Representation Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 10085-10092. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17210
AAAI Technical Track on Machine Learning IV