Mining Fine-Grained Image-Text Alignment for Zero-Shot Captioning via Text-Only Training

Authors

  • Longtian Qiu, ShanghaiTech University
  • Shan Ning, ShanghaiTech University
  • Xuming He, ShanghaiTech University; Shanghai Engineering Research Center of Intelligent Vision and Imaging

DOI:

https://doi.org/10.1609/aaai.v38i5.28260

Keywords:

CV: Language and Vision, NLP: Generation

Abstract

Image captioning aims at generating descriptive and meaningful textual descriptions of images, enabling a broad range of vision-language applications. Prior works have demonstrated that harnessing the power of Contrastive Language-Image Pre-training (CLIP) offers a promising approach to achieving zero-shot captioning, eliminating the need for expensive caption annotations. However, the widely observed modality gap in the latent space of CLIP harms the performance of zero-shot captioning by breaking the alignment between paired image and text features. To address this issue, we conduct an analysis of the CLIP latent space which leads to two findings. First, we observe that CLIP's visual features of image subregions can lie closer to the paired caption feature, owing to the inherent information loss in text descriptions. In addition, we show that the modality gap between a paired image and text can be empirically modeled as a zero-mean Gaussian distribution. Motivated by these findings, we propose a novel zero-shot image captioning framework with text-only training to reduce the modality gap. In particular, we introduce a subregion feature aggregation that leverages local region information to produce a compact visual representation for matching the text representation. Moreover, we incorporate a noise injection and CLIP reranking strategy to boost captioning performance. We also extend our framework to build a zero-shot VQA pipeline, demonstrating its generality. Through extensive experiments on common captioning and VQA datasets such as MSCOCO, Flickr30k and VQAv2, we show that our method achieves remarkable performance improvements. Code is available at https://github.com/Artanic30/MacCap.
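To make the text-only training idea concrete, the sketch below illustrates the noise-injection step described in the abstract: since the modality gap is modeled as zero-mean Gaussian, a caption decoder can be trained on CLIP text embeddings perturbed by Gaussian noise so they mimic image features at inference time. This is a minimal illustration, not the authors' released implementation; the noise scale `noise_std` and the toy `PrefixDecoder` are hypothetical placeholders.

```python
# Minimal sketch (assumptions, not the MacCap code): text-only training where
# zero-mean Gaussian noise added to a CLIP text embedding stands in for the
# modality gap, so the decoder can later consume image features directly.
import torch
import torch.nn as nn

noise_std = 0.016  # hypothetical noise scale; a real system would tune this


class PrefixDecoder(nn.Module):
    """Toy single-token prediction head, used only to illustrate the step."""

    def __init__(self, clip_dim: int = 512, vocab: int = 32000, hidden: int = 512):
        super().__init__()
        self.proj = nn.Linear(clip_dim, hidden)
        self.lm_head = nn.Linear(hidden, vocab)

    def forward(self, clip_feat: torch.Tensor) -> torch.Tensor:
        return self.lm_head(torch.relu(self.proj(clip_feat)))


def training_step(text_feat, target_tokens, decoder, optimizer):
    # Normalize the CLIP text embedding, then inject zero-mean Gaussian noise
    # so the training-time input resembles a gap-shifted image feature.
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    noisy_feat = text_feat + noise_std * torch.randn_like(text_feat)
    logits = decoder(noisy_feat)  # (batch, vocab)
    loss = nn.functional.cross_entropy(logits, target_tokens)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Dummy tensors in place of real CLIP text features and caption tokens.
    decoder = PrefixDecoder()
    optimizer = torch.optim.AdamW(decoder.parameters(), lr=1e-4)
    fake_text_feat = torch.randn(8, 512)
    fake_targets = torch.randint(0, 32000, (8,))
    print("loss:", training_step(fake_text_feat, fake_targets, decoder, optimizer))
```

At inference, the same decoder would instead be fed an aggregated CLIP image (or subregion) feature, which is the setting the noise injection is meant to approximate during training.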

Published

2024-03-24

How to Cite

Qiu, L., Ning, S., & He, X. (2024). Mining Fine-Grained Image-Text Alignment for Zero-Shot Captioning via Text-Only Training. Proceedings of the AAAI Conference on Artificial Intelligence, 38(5), 4605-4613. https://doi.org/10.1609/aaai.v38i5.28260

Issue

Section

AAAI Technical Track on Computer Vision IV