Associative Variational Auto-Encoder with Distributed Latent Spaces and Associators
DOI: https://doi.org/10.1609/aaai.v34i07.6778

Abstract
In this paper, we propose a novel structure for multi-modal data association, referred to as the Associative Variational Auto-Encoder (AVAE). In contrast to existing models that use a single latent space shared among modalities, our structure adopts distributed latent spaces, one per modality, connected through cross-modal associators. The proposed structure successfully associates even heterogeneous modality data and easily incorporates additional modalities into the network via new associators. Furthermore, only a small amount of supervised (paired) data is required to train the associators once the auto-encoders have been trained in an unsupervised manner. Experiments on various datasets, including visual and auditory data, validate the effectiveness of the proposed structure.
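To make the two-stage idea described above concrete, the following PyTorch sketch illustrates one plausible reading of the structure: each modality gets its own variational auto-encoder (and hence its own latent space), and a small "associator" network maps one modality's latent code to the other's, trained on a handful of paired samples while the auto-encoders stay frozen. All layer sizes, latent dimensions, and the MSE association loss are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the AVAE idea: per-modality VAEs with distributed
# latent spaces plus a cross-modal associator. Dimensions and losses
# are assumptions for illustration, not the authors' exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityVAE(nn.Module):
    """Variational auto-encoder for a single modality with its own latent space."""

    def __init__(self, input_dim, latent_dim, hidden_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar


class Associator(nn.Module):
    """Maps a latent code of one modality to the latent space of another."""

    def __init__(self, src_dim, dst_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(src_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, dst_dim),
        )

    def forward(self, z_src):
        return self.net(z_src)


# Stage 1 (unsupervised): each VAE would be trained on its own unpaired data.
vae_a = ModalityVAE(input_dim=784, latent_dim=32)   # e.g. an image modality
vae_b = ModalityVAE(input_dim=128, latent_dim=16)   # e.g. audio features

# Stage 2 (supervised): freeze the VAEs and fit the associator on a small
# paired batch by matching latent codes across modalities.
assoc_ab = Associator(src_dim=32, dst_dim=16)
optimizer = torch.optim.Adam(assoc_ab.parameters(), lr=1e-3)

x_a = torch.randn(8, 784)   # placeholder paired batch, modality A
x_b = torch.randn(8, 128)   # placeholder paired batch, modality B

with torch.no_grad():
    mu_a, _ = vae_a.encode(x_a)
    mu_b, _ = vae_b.encode(x_b)

loss = F.mse_loss(assoc_ab(mu_a), mu_b)   # illustrative association loss
loss.backward()
optimizer.step()

# Cross-modal generation: decode a modality-B sample from a modality-A input.
x_b_generated = vae_b.decoder(assoc_ab(vae_a.encode(x_a)[0]))
```

Because the associator is a separate module attached between two already-trained latent spaces, adding a further modality amounts to training one more VAE and one more small associator, rather than retraining a shared joint latent space.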