Deep Robust Unsupervised Multi-Modal Network


  • Yang Yang Nanjing University
  • Yi-Feng Wu Nanjing University
  • De-Chuan Zhan Nanjing University
  • Zhi-Bin Liu Tencent WXG
  • Yuan Jiang Nanjing University



In real-world applications, data often come with multiple modalities, and many multi-modal learning approaches have been proposed to integrate information from different sources. Most previous multi-modal methods exploit modal consistency to reduce the complexity of the learning problem, and therefore require modal completeness to be guaranteed. However, due to data collection failures, self-deficiencies, and various other reasons, multi-modal instances in real applications are often incomplete, and even complete instances may contain inconsistent anomalies; together these give rise to the inconsistency problem. Such issues degrade the performance of multi-modal feature learning and ultimately hurt generalization across different tasks. In this paper, we propose a novel Deep Robust Unsupervised Multi-modal Network (DRUMN) that addresses this problem within a unified framework. DRUMN exploits extrinsic heterogeneous information from unlabeled data to compensate for the insufficiency caused by incompleteness. The inconsistent-anomaly issue, in turn, is handled by an adaptive weighted estimation rather than by tuning complex thresholds. Since DRUMN extracts discriminative feature representations for each modality, experiments on real-world multi-modal datasets validate the effectiveness of the proposed method.
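The abstract does not give the exact form of the adaptive weighted estimation, but the general idea of down-weighting likely-inconsistent instances without hand-tuned thresholds can be illustrated with a minimal sketch. The following is an assumption-laden example, not the paper's method: it assigns each instance a weight via a softmax over its negative reconstruction error, so anomalous instances (large error) contribute less to subsequent learning; the function name, the softmax form, and the `temperature` parameter are all hypothetical.

```python
import numpy as np

def adaptive_weights(recon_errors, temperature=1.0):
    """Hypothetical adaptive weighting: instances with large
    reconstruction error (likely inconsistent anomalies) get
    small weights via a softmax over negative errors, avoiding
    any hard anomaly threshold."""
    e = np.asarray(recon_errors, dtype=float)
    logits = -e / temperature
    logits -= logits.max()          # numerical stability
    w = np.exp(logits)
    return w / w.sum()              # weights sum to 1

# Example: the third instance reconstructs poorly and is
# automatically down-weighted relative to the others.
errors = np.array([0.1, 0.2, 5.0])
w = adaptive_weights(errors)
```

A weighted training loss would then multiply each instance's loss term by `w[i]`, so the anomaly's influence fades smoothly instead of being cut off at a threshold.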




How to Cite

Yang, Y., Wu, Y.-F., Zhan, D.-C., Liu, Z.-B., & Jiang, Y. (2019). Deep Robust Unsupervised Multi-Modal Network. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 5652-5659.



AAAI Technical Track: Machine Learning