MASS: Overcoming Language Bias in Image-Text Matching

Authors

  • Jiwan Chung, Yonsei University
  • Seungwon Lim, Yonsei University
  • Sangkyu Lee, Yonsei University
  • Youngjae Yu, Yonsei University

DOI:

https://doi.org/10.1609/aaai.v39i3.32262

Abstract

Pretrained visual-language models have made significant advances in multimodal tasks, including image-text retrieval. However, a major challenge in image-text matching is language bias: models rely predominantly on language priors and neglect to adequately consider the visual content. We thus present Multimodal ASsociation Score (MASS), a framework that reduces reliance on language priors for better visual accuracy in image-text matching. It can be seamlessly incorporated into existing visual-language models without additional training. Our experiments show that MASS effectively lessens language bias without losing an understanding of linguistic compositionality. Overall, MASS offers a promising solution for enhancing image-text matching performance in visual-language models.
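The abstract does not spell out how the score is computed. One common way to discount a language prior in likelihood-based matching is a pointwise-mutual-information-style correction, sketched below with hypothetical toy log-probabilities (the function name and numbers are illustrative assumptions, not the paper's exact formulation; a real system would obtain the conditional likelihood from a captioning model and the prior from a language model).

```python
def pmi_match_score(log_p_text_given_image: float, log_p_text: float) -> float:
    """PMI-style association score (an illustrative assumption, not the
    paper's exact formula): high when conditioning on the image raises the
    caption's likelihood beyond what the language prior alone predicts."""
    return log_p_text_given_image - log_p_text

# A generically plausible caption ("a dog in a park") can score well under
# the language prior even when it ignores the image; subtracting the prior
# discounts that bias so the visually grounded caption wins.
generic = pmi_match_score(log_p_text_given_image=-2.0, log_p_text=-1.5)   # -0.5
grounded = pmi_match_score(log_p_text_given_image=-3.0, log_p_text=-6.0)  # 3.0
best = max([("generic", generic), ("grounded", grounded)], key=lambda kv: kv[1])
print(best[0])  # grounded
```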

Published

2025-04-11

How to Cite

Chung, J., Lim, S., Lee, S., & Yu, Y. (2025). MASS: Overcoming Language Bias in Image-Text Matching. Proceedings of the AAAI Conference on Artificial Intelligence, 39(3), 2591–2599. https://doi.org/10.1609/aaai.v39i3.32262

Section

AAAI Technical Track on Computer Vision II