CMCGAN: A Uniform Framework for Cross-Modal Visual-Audio Mutual Generation

Authors

  • Wangli Hao, CASIA; Institute of Automation, University of Chinese Academy of Sciences
  • Zhaoxiang Zhang, CASIA; CAS; University of Chinese Academy of Sciences
  • He Guan, CASIA; Institute of Automation, University of Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v32i1.12329

Keywords:

cross-modal, visual-audio generation

Abstract

Visual and audio modalities are two symbiotic modalities underlying videos; they carry both common and complementary information. If this information can be mined and fused sufficiently, the performance of related video tasks can be significantly enhanced. However, owing to environmental interference or sensor faults, sometimes only one modality is available while the other is corrupted or missing. Recovering the missing modality from the available one, based on the information shared between the two modalities and the prior knowledge of the missing one, can therefore benefit various vision tasks. In this paper, we propose a Cross-Modal Cycle Generative Adversarial Network (CMCGAN) for cross-modal visual-audio mutual generation. Specifically, CMCGAN comprises four kinds of subnetworks, audio-to-visual, visual-to-audio, audio-to-audio, and visual-to-visual, organized in a cycle architecture. CMCGAN has several notable advantages. First, it unifies visual-audio mutual generation in a common framework through a joint corresponding adversarial loss. Second, by introducing a latent vector with a Gaussian distribution, it effectively handles the dimension and structure asymmetry between the visual and audio modalities. Third, it can be trained end-to-end, which is convenient in practice. Building on CMCGAN, we further develop a dynamic multimodal classification network to handle the missing-modality problem. Extensive experiments validate that CMCGAN achieves state-of-the-art cross-modal visual-audio generation results. Furthermore, the generated modality performs comparably to the original one, which demonstrates the effectiveness and advantages of the proposed method.
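
The abstract does not include implementation details, so the following is only a minimal PyTorch sketch of the cycle structure it describes: four generation paths (audio-to-visual, visual-to-audio, audio-to-audio, visual-to-visual) built from per-modality encoders and decoders, with a Gaussian latent vector concatenated before decoding to absorb the dimension and structure asymmetry. All module names, layer sizes, and tensor shapes here are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of the CMCGAN generation paths described in the abstract.
# Shapes, layer sizes, and names are illustrative assumptions only.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Maps one modality (flattened) into a shared feature space."""
    def __init__(self, in_dim, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, feat_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Maps a shared feature plus a Gaussian latent vector z to one modality.

    Concatenating z is how this sketch models the paper's idea of handling
    the dimension/structure asymmetry between audio and visual outputs.
    """
    def __init__(self, feat_dim, z_dim, out_shape):
        super().__init__()
        self.out_shape = out_shape
        out_dim = 1
        for s in out_shape:
            out_dim *= s
        self.net = nn.Sequential(nn.Linear(feat_dim + z_dim, out_dim), nn.Tanh())

    def forward(self, feat, z):
        out = self.net(torch.cat([feat, z], dim=1))
        return out.view(-1, *self.out_shape)


class CMCGANGenerators(nn.Module):
    """Four generation paths (A->V, V->A, A->A, V->V) sharing per-modality
    encoders/decoders, arranged so the cycles A->V->A and V->A->V exist."""
    def __init__(self, audio_shape=(1, 64, 64), visual_shape=(3, 64, 64), z_dim=100):
        super().__init__()
        a_dim = audio_shape[0] * audio_shape[1] * audio_shape[2]
        v_dim = visual_shape[0] * visual_shape[1] * visual_shape[2]
        self.z_dim = z_dim
        self.enc_a, self.enc_v = Encoder(a_dim), Encoder(v_dim)
        self.dec_a = Decoder(256, z_dim, audio_shape)
        self.dec_v = Decoder(256, z_dim, visual_shape)

    def forward(self, audio, visual):
        z = torch.randn(audio.size(0), self.z_dim, device=audio.device)
        fake_v = self.dec_v(self.enc_a(audio), z)    # A -> V
        fake_a = self.dec_a(self.enc_v(visual), z)   # V -> A
        rec_a = self.dec_a(self.enc_v(fake_v), z)    # A -> V -> A (cycle)
        rec_v = self.dec_v(self.enc_a(fake_a), z)    # V -> A -> V (cycle)
        same_a = self.dec_a(self.enc_a(audio), z)    # A -> A
        same_v = self.dec_v(self.enc_v(visual), z)   # V -> V
        return fake_v, fake_a, rec_a, rec_v, same_a, same_v


# Usage: one forward pass with dummy batches of spectrogram patches and frames.
gen = CMCGANGenerators()
audio = torch.randn(4, 1, 64, 64)   # stand-in for log-spectrogram patches
visual = torch.randn(4, 3, 64, 64)  # stand-in for video frames
outputs = gen(audio, visual)
print([o.shape for o in outputs])
```

In the paper, these paths would be trained jointly with discriminators under the joint corresponding adversarial loss; only the generator wiring is sketched here.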

Published

2018-04-27

How to Cite

Hao, W., Zhang, Z., & Guan, H. (2018). CMCGAN: A Uniform Framework for Cross-Modal Visual-Audio Mutual Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12329