Learning Cross-Aligned Latent Embeddings for Zero-Shot Cross-Modal Retrieval

Authors

  • Kaiyi Lin University of Electronic Science and Technology of China
  • Xing Xu University of Electronic Science and Technology of China
  • Lianli Gao University of Electronic Science and Technology of China
  • Zheng Wang University of Electronic Science and Technology of China
  • Heng Tao Shen University of Electronic Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v34i07.6817

Abstract

Zero-Shot Cross-Modal Retrieval (ZS-CMR) is an emerging research hotspot that aims to retrieve data of unseen classes across different modalities. It is challenging due to not only the heterogeneous distributions across modalities, but also the inconsistent semantics between seen and unseen classes. A handful of recently proposed methods typically borrow the idea from zero-shot learning, i.e., exploiting word embeddings of class labels (i.e., class-embeddings) as the common semantic space and using a generative adversarial network (GAN) to capture the underlying multimodal data structures, as well as to strengthen the relations between input data and the semantic space so as to generalize across seen and unseen classes. In this paper, we propose a novel method termed Learning Cross-Aligned Latent Embeddings (LCALE) as an alternative to these GAN-based methods for ZS-CMR. Rather than using the class-embeddings as the semantic space, our method seeks a shared low-dimensional latent space for input multimodal features and class-embeddings via modality-specific variational autoencoders. Notably, we align the distributions learned from the multimodal input features with those learned from the class-embeddings to construct latent embeddings that contain the essential cross-modal correlations associated with unseen classes. Effective cross-reconstruction and cross-alignment criteria are further developed to preserve class-discriminative information in the latent space, which benefits retrieval efficiency and enables knowledge transfer to unseen classes. We evaluate our model on four benchmark datasets for image-text retrieval and one large-scale dataset for image-sketch retrieval. The experimental results show that our method establishes new state-of-the-art performance for both tasks on all datasets.
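To make the idea of cross-aligned latent embeddings concrete, the sketch below outlines, in PyTorch-style Python, how modality-specific VAEs, cross-reconstruction, and latent distribution alignment could fit together. It is a minimal illustration under assumed choices (feature dimensions, mean-squared-error reconstruction, a closed-form 2-Wasserstein alignment term, and equal loss weights), not the authors' LCALE implementation.

```python
# Minimal sketch of cross-aligned VAE latent embeddings for zero-shot
# cross-modal retrieval. Dimensions, losses, and weights are assumptions
# for illustration only; this is NOT the authors' exact LCALE objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEBranch(nn.Module):
    """Modality-specific encoder/decoder mapping features into a shared latent space."""
    def __init__(self, feat_dim, latent_dim=64, hidden=512):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, feat_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

def kl_div(mu, logvar):
    # KL(q(z|x) || N(0, I)), averaged over the batch and latent dimensions.
    return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

def wasserstein_align(mu1, logvar1, mu2, logvar2):
    # Closed-form 2-Wasserstein distance between diagonal Gaussians,
    # used here as an assumed latent distribution-alignment term.
    std1, std2 = torch.exp(0.5 * logvar1), torch.exp(0.5 * logvar2)
    return torch.mean((mu1 - mu2).pow(2) + (std1 - std2).pow(2))

def cross_aligned_loss(img_vae, txt_vae, cls_vae, img_x, txt_x, cls_emb):
    """Within-branch reconstruction + KL, cross-reconstruction across decoders,
    and pairwise latent alignment. Inputs are assumed paired by class."""
    branches = [(img_vae, img_x), (txt_vae, txt_x), (cls_vae, cls_emb)]
    losses, stats = [], []
    for vae, x in branches:
        mu, logvar = vae.encode(x)
        z = vae.reparameterize(mu, logvar)
        losses.append(F.mse_loss(vae.dec(z), x) + kl_div(mu, logvar))
        stats.append((mu, logvar, z))
    # Cross-reconstruction: decode each branch's latent with every other decoder.
    for i, (vae_i, x_i) in enumerate(branches):
        for j, (_, _, z_j) in enumerate(stats):
            if i != j:
                losses.append(F.mse_loss(vae_i.dec(z_j), x_i))
    # Cross-alignment: match latent distributions of all branches pairwise.
    for i in range(len(stats)):
        for j in range(i + 1, len(stats)):
            losses.append(wasserstein_align(stats[i][0], stats[i][1],
                                            stats[j][0], stats[j][1]))
    return sum(losses)
```

At retrieval time, one would typically embed queries and gallery items with their respective modality encoders and rank by distance between latent means; the equal loss weighting and the specific alignment metric above are placeholders that the paper's actual objective would refine.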

Published

2020-04-03

How to Cite

Lin, K., Xu, X., Gao, L., Wang, Z., & Shen, H. T. (2020). Learning Cross-Aligned Latent Embeddings for Zero-Shot Cross-Modal Retrieval. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 11515-11522. https://doi.org/10.1609/aaai.v34i07.6817

Section

AAAI Technical Track: Vision