Deep Embedded Complementary and Interactive Information for Multi-View Classification


  • Jinglin Xu Northwestern Polytechnical University
  • Wenbin Li Nanjing University
  • Xinwang Liu National University of Defense Technology
  • Dingwen Zhang Xidian University
  • Ji Liu Kwai Inc.
  • Junwei Han Northwestern Polytechnical University



Multi-view classification integrates features from different views to improve classification performance. Although most existing works demonstrate promising results in various computer vision applications, we observe that they can be further improved by fully exploiting complementary view-specific information, deep interactive information between different views, and the strategy for fusing the views. In this work, we propose a novel multi-view learning framework that seamlessly embeds view-specific information and deep interactive information, and introduces a novel multi-view fusion strategy to make a joint decision during optimization for classification. Specifically, we utilize different deep neural networks to learn multiple view-specific representations and model deep interactive information through a shared interactive network using the cross-correlations between attributes of these representations. After that, we adaptively integrate the multiple neural networks by flexibly tuning the power exponent of the weights, which not only avoids the trivial weight solution but also provides a new approach to fusing the outputs of different deterministic neural networks. Extensive experiments on several public datasets demonstrate the rationality and effectiveness of our method.
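The power-exponent weighting mentioned above can be illustrated with a minimal sketch. The function names and the exact objective are assumptions, not the paper's implementation: we assume an auto-weighted objective of the common form min Σ_v w_v^r L_v subject to Σ_v w_v = 1, w_v ≥ 0, whose closed-form solution gives each view a weight proportional to L_v^{1/(1-r)}. With r > 1 this avoids the trivial one-hot solution that would put all weight on the single best view:

```python
import numpy as np

def fuse_view_weights(view_losses, r=2.0):
    """Hypothetical sketch: solve min sum_v w_v^r * L_v s.t. sum_v w_v = 1.
    The Lagrangian gives w_v proportional to L_v ** (1 / (1 - r)).
    r > 1 keeps all weights strictly positive (no trivial one-hot weight);
    larger r pushes the weights toward a uniform distribution."""
    losses = np.asarray(view_losses, dtype=float)
    scores = losses ** (1.0 / (1.0 - r))
    return scores / scores.sum()

def fuse_predictions(view_logits, weights):
    """Weighted combination of per-view softmax outputs for a joint decision."""
    probs = [np.exp(z - z.max(axis=-1, keepdims=True)) for z in view_logits]
    probs = [p / p.sum(axis=-1, keepdims=True) for p in probs]
    return sum(w * p for w, p in zip(weights, probs))
```

For example, with per-view losses [0.5, 1.0, 2.0] and r = 2, the weights are proportional to [2, 1, 0.5], so the lowest-loss view receives the largest (but not all) of the weight.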




How to Cite

Xu, J., Li, W., Liu, X., Zhang, D., Liu, J., & Han, J. (2020). Deep Embedded Complementary and Interactive Information for Multi-View Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6494-6501.



AAAI Technical Track: Machine Learning