Imagined Visual Representations as Multimodal Embeddings

Authors

  • Guillem Collell, Katholieke Universiteit Leuven
  • Ted Zhang, Katholieke Universiteit Leuven
  • Marie-Francine Moens, Katholieke Universiteit Leuven

DOI:

https://doi.org/10.1609/aaai.v31i1.11155

Keywords:

multimodal representations, representation learning, semantic similarity, semantic relatedness, visual similarity

Abstract

Language and vision provide complementary information. Integrating both modalities in a single multimodal representation is an unsolved problem with wide-reaching applications to both natural language processing and computer vision. In this paper, we present a simple and effective method that learns a language-to-vision mapping and uses its output visual predictions to build multimodal representations. In this sense, our method provides a cognitively plausible way of building representations, consistent with the inherently re-constructive and associative nature of human memory. Using seven benchmark concept similarity tests we show that the mapped (or imagined) vectors not only help to fuse multimodal information, but also outperform strong unimodal baselines and state-of-the-art multimodal methods, thus exhibiting more human-like judgments. Ultimately, the present work sheds light on fundamental questions of natural language understanding concerning the fusion of vision and language such as the plausibility of more associative and re-constructive approaches.
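To make the pipeline described in the abstract more concrete, the sketch below shows one possible instantiation: a regularized linear (ridge-regression) mapping learned from text embeddings to visual feature vectors, whose predicted ("imagined") outputs are concatenated with the text embeddings to form multimodal representations. The toy data, dimensionalities, choice of regressor, and per-modality normalization are illustrative assumptions for this sketch, not the authors' exact setup.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical stand-ins for pre-trained embeddings:
# 300-d text vectors (e.g., GloVe-like) for all words, and 128-d visual
# feature vectors (e.g., CNN features) for the subset of words with images.
rng = np.random.default_rng(0)
vocab = ["dog", "cat", "car", "tree", "house"]
word_vecs = {w: rng.standard_normal(300) for w in vocab}
visual_vecs = {w: rng.standard_normal(128) for w in vocab[:4]}  # "house" has no image

# 1) Learn a language-to-vision mapping on the words that have visual features.
train_words = [w for w in vocab if w in visual_vecs]
X = np.stack([word_vecs[w] for w in train_words])
Y = np.stack([visual_vecs[w] for w in train_words])
mapping = Ridge(alpha=1.0).fit(X, Y)

# 2) "Imagine" a visual vector for any word (even one without images) and
#    concatenate it with the text embedding to obtain the multimodal vector.
def multimodal(word):
    text = word_vecs[word]
    imagined = mapping.predict(text[None, :])[0]
    # L2-normalize each modality before concatenating so neither dominates.
    text = text / np.linalg.norm(text)
    imagined = imagined / np.linalg.norm(imagined)
    return np.concatenate([text, imagined])

print(multimodal("house").shape)  # (428,) even though "house" had no image
```

In this kind of setup, the mapped vectors extend visual information to the full vocabulary, and similarity benchmarks would be scored by comparing the concatenated multimodal vectors (e.g., with cosine similarity).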

Published

2017-02-12

How to Cite

Collell, G., Zhang, T., & Moens, M.-F. (2017). Imagined Visual Representations as Multimodal Embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.11155