BridgeTower: Building Bridges between Encoders in Vision-Language Representation Learning

Authors

  • Xiao Xu (Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology; Microsoft Research Asia)
  • Chenfei Wu (Microsoft Research Asia)
  • Shachar Rosenman (Intel Labs, Cognitive Computing Research)
  • Vasudev Lal (Intel Labs, Cognitive Computing Research)
  • Wanxiang Che (Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology)
  • Nan Duan (Microsoft Research Asia)

DOI:

https://doi.org/10.1609/aaai.v37i9.26263

Keywords:

ML: Multimodal Learning, SNLP: Speech and Multimodality

Abstract

Vision-Language (VL) models with the Two-Tower architecture have dominated vision-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align, and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BridgeTower, which introduces multiple bridge layers that build a connection between the top layers of the uni-modal encoders and each layer of the cross-modal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations at different semantic levels from the pre-trained uni-modal encoders. Pre-trained with only 4M images, BridgeTower achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BridgeTower achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BridgeTower achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets. Code and checkpoints are available at https://github.com/microsoft/BridgeTower.
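The bridge-layer idea described in the abstract can be pictured with a short sketch. The snippet below is illustrative only, not the released implementation (see the linked repository): it assumes each bridge is a learned linear projection followed by a residual add and LayerNorm, and the names (BridgeLayer, proj, the layer count k, the toy dimensions) are hypothetical. It only shows how the top-k uni-modal layer outputs could be fused, bottom-up, into the inputs of the k cross-modal layers.

```python
import torch
from torch import nn

class BridgeLayer(nn.Module):
    """Hypothetical bridge: injects one uni-modal layer's output into the
    cross-modal stream via a projected residual combination (sketch only)."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)   # assumed linear projection
        self.norm = nn.LayerNorm(dim)

    def forward(self, cross_states: torch.Tensor, uni_states: torch.Tensor) -> torch.Tensor:
        # combine the current cross-modal states with the bridged uni-modal states
        return self.norm(cross_states + self.proj(uni_states))

if __name__ == "__main__":
    dim, k = 768, 6
    # stand-ins for the top-k layer outputs of frozen-or-finetuned uni-modal encoders
    text_layers  = [torch.randn(1, 16, dim) for _ in range(k)]   # textual encoder layers
    image_layers = [torch.randn(1, 50, dim) for _ in range(k)]   # visual encoder layers
    t_bridges = nn.ModuleList([BridgeLayer(dim) for _ in range(k)])
    v_bridges = nn.ModuleList([BridgeLayer(dim) for _ in range(k)])

    t, v = text_layers[0], image_layers[0]
    for i in range(k):
        t = t_bridges[i](t, text_layers[i])    # bridge in the i-th textual representation
        v = v_bridges[i](v, image_layers[i])   # bridge in the i-th visual representation
        # ... the i-th cross-modal encoder layer (e.g. co-attention) would run here ...
    print(t.shape, v.shape)
```

In this sketch the bridges add only one linear layer and one LayerNorm per cross-modal layer and modality, which is consistent with the abstract's claim of almost negligible additional parameters and computational cost.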

Published

2023-06-26

How to Cite

Xu, X., Wu, C., Rosenman, S., Lal, V., Che, W., & Duan, N. (2023). BridgeTower: Building Bridges between Encoders in Vision-Language Representation Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 10637-10647. https://doi.org/10.1609/aaai.v37i9.26263

Section

AAAI Technical Track on Machine Learning IV