Unifying Vision-Language Representation Space with Single-Tower Transformer

Authors

  • Jiho Jang, Seoul National University
  • Chaerin Kong, Seoul National University
  • DongHyeon Jeon, Naver
  • Seonhoon Kim, Coupang
  • Nojun Kwak, Seoul National University

DOI:

https://doi.org/10.1609/aaai.v37i1.25178

Keywords:

CV: Language and Vision, ML: Representation Learning, ML: Unsupervised & Self-Supervised Learning

Abstract

Contrastive learning is a form of distance learning that aims to learn invariant features from two related representations. In this work, we explore the hypothesis that an image and its caption can be regarded as two different views of the underlying mutual information, and train a model to learn a unified vision-language representation space that encodes both modalities at once in a modality-agnostic manner. We first identify the difficulties in training a one-tower model for vision-language pretraining (VLP), and propose One Representation (OneR) as a simple yet effective framework for our goal. We discover intriguing properties that distinguish OneR from previous works with modality-specific representation spaces, such as zero-shot localization, text-guided visual reasoning, and multi-modal retrieval, and present analyses to provide insights into this new form of multi-modal representation learning. Thorough evaluations demonstrate the potential of a unified, modality-agnostic VLP framework.
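
To make the single-tower idea concrete, the sketch below shows one way such a model could be wired up: images and captions get their own input embeddings, but a single shared Transformer encoder and projection map both into one joint space, trained with a symmetric contrastive (InfoNCE-style) loss over matched image-caption pairs. This is a minimal illustration under assumed hyperparameters; names such as SharedEncoder and contrastive_loss are hypothetical, and the actual OneR architecture, objectives, and pre-processing are those described in the paper, not this code.

```python
# Illustrative single-tower vision-language encoder with a contrastive loss.
# Hypothetical sketch, not the authors' implementation of OneR.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedEncoder(nn.Module):
    """One Transformer encoder shared by both modalities (modality-agnostic)."""

    def __init__(self, dim=256, depth=4, heads=4, vocab_size=30522,
                 patch_size=16, image_size=224, max_text_len=32):
        super().__init__()
        # Modality-specific input embeddings; everything after this is shared.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.token_embed = nn.Embedding(vocab_size, dim)
        num_patches = (image_size // patch_size) ** 2
        self.img_pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        self.txt_pos = nn.Parameter(torch.zeros(1, max_text_len, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.proj = nn.Linear(dim, dim)  # shared projection into the joint space

    def encode_image(self, images):            # images: (B, 3, H, W)
        x = self.patch_embed(images).flatten(2).transpose(1, 2) + self.img_pos
        x = self.encoder(x).mean(dim=1)         # mean-pool patch features
        return F.normalize(self.proj(x), dim=-1)

    def encode_text(self, token_ids):           # token_ids: (B, L)
        x = self.token_embed(token_ids) + self.txt_pos[:, :token_ids.size(1)]
        x = self.encoder(x).mean(dim=1)          # same encoder, same projection
        return F.normalize(self.proj(x), dim=-1)


def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE: matched image-caption pairs are the positives."""
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    model = SharedEncoder()
    images = torch.randn(8, 3, 224, 224)
    captions = torch.randint(0, 30522, (8, 32))
    loss = contrastive_loss(model.encode_image(images), model.encode_text(captions))
    print(loss.item())
```

The contrast with two-tower models such as CLIP is that here both modalities pass through the same Transformer weights and the same projection, so image and text embeddings live in a single shared, modality-agnostic representation space rather than in two separately learned ones.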

Published

2023-06-26

How to Cite

Jang, J., Kong, C., Jeon, D., Kim, S., & Kwak, N. (2023). Unifying Vision-Language Representation Space with Single-Tower Transformer. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 980-988. https://doi.org/10.1609/aaai.v37i1.25178

Section

AAAI Technical Track on Computer Vision I