Multimodal Adversarially Learned Inference with Factorized Discriminators
DOI: https://doi.org/10.1609/aaai.v36i6.20580
Keywords: Machine Learning (ML), Computer Vision (CV)
Abstract
Learning from multimodal data is an important research topic in machine learning, with the potential to yield better representations. In this work, we propose a novel approach to generative modeling of multimodal data based on generative adversarial networks. To learn a coherent multimodal generative model, we show that it is necessary to align the different encoder distributions with the joint decoder distribution simultaneously. To this end, we construct a specific form of the discriminator that enables our model to use data efficiently and that can be trained contrastively. By taking advantage of contrastive learning through the factorized discriminator, we can also train our model on unimodal data. We have conducted experiments on benchmark datasets, whose promising results show that our proposed approach outperforms state-of-the-art methods on a variety of metrics. The source code is publicly available at https://github.com/6b5d/mmali.
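The abstract's central idea, a joint discriminator factorized into per-modality terms so that unimodal data can still drive training, can be illustrated with a toy sketch. The bilinear branch scores, the weight matrices `W1`/`W2`, and the softplus loss below are all illustrative assumptions, not the authors' architecture; the point is only that the joint score decomposes into terms that each touch one modality plus the shared latent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (not the paper's architecture): two modalities
# x1, x2 and a shared latent code z, each a small random vector.
dim = 4
x1, x2, z = rng.normal(size=dim), rng.normal(size=dim), rng.normal(size=dim)

# A monolithic joint discriminator would score the full tuple (x1, x2, z).
# A factorized discriminator instead sums per-modality scores, so each
# branch only ever sees one modality paired with the latent -- which is
# what lets unimodal samples contribute a gradient.
W1 = rng.normal(size=(dim, dim))  # parameters of the x1 branch (illustrative)
W2 = rng.normal(size=(dim, dim))  # parameters of the x2 branch (illustrative)

def branch_score(x, z, W):
    """Bilinear critic score for one (modality, latent) pair."""
    return float(x @ W @ z)

def factorized_score(x1, x2, z):
    """Joint score as a sum of unimodal branch scores."""
    return branch_score(x1, z, W1) + branch_score(x2, z, W2)

# Non-saturating (softplus) GAN-style loss on one "matched" tuple and one
# "mismatched" tuple where the latent is resampled, mimicking the
# adversarial game between encoder/decoder distributions.
z_fake = rng.normal(size=dim)
loss = np.log1p(np.exp(-factorized_score(x1, x2, z))) \
     + np.log1p(np.exp(factorized_score(x1, x2, z_fake)))
print(f"toy discriminator loss: {loss:.4f}")
```

Because the joint score is additive over branches, a sample missing modality `x2` can still update `W1` through its branch term alone; that additivity is the property the sketch is meant to exhibit, not a claim about the paper's exact objective.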
Published
2022-06-28
How to Cite
Chen, W., & Zhu, J. (2022). Multimodal Adversarially Learned Inference with Factorized Discriminators. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6304-6312. https://doi.org/10.1609/aaai.v36i6.20580
Section
AAAI Technical Track on Machine Learning I