VQA4CIR: Boosting Composed Image Retrieval with Visual Question Answering

Authors

  • Chun-Mei Feng Institute of High Performance Computing, Singapore, A*STAR
  • Yang Bai Institute of High Performance Computing, Singapore, A*STAR
  • Tao Luo Institute of High Performance Computing, Singapore, A*STAR
  • Zhen Li The Chinese University of Hong Kong, Shenzhen
  • Salman Khan Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), UAE; Australian National University, Canberra ACT, Australia
  • Wangmeng Zuo Harbin Institute of Technology
  • Rick Siow Mong Goh Institute of High Performance Computing, Singapore, A*STAR
  • Yong Liu Institute of High Performance Computing, Singapore, A*STAR

DOI:

https://doi.org/10.1609/aaai.v39i3.32301

Abstract

Although progress has been made in Composed Image Retrieval (CIR), we empirically find that a certain percentage of failure retrieval results are not consistent with their relative captions. To address this issue, this work provides a Visual Question Answering (VQA) perspective to boost the performance of CIR. The resulting VQA4CIR is a post-processing approach that can be directly plugged into existing CIR methods. Given the top-C images retrieved by a CIR method, VQA4CIR aims to decrease the adverse effect of failure retrieval results that are inconsistent with the relative caption. To find the retrieved images inconsistent with the relative caption, we resort to a "QA generation → VQA" self-verification pipeline. For QA generation, we fine-tune an LLM (e.g., LLaMA) to generate several pairs of questions and answers from each relative caption. We then fine-tune an LVLM (e.g., LLaVA) to obtain the VQA model. By feeding a retrieved image and a question to the VQA model, one can identify images inconsistent with the relative caption when the answer given by the VQA model disagrees with the answer in the QA pair. Consequently, CIR performance can be boosted by modifying the ranks of inconsistently retrieved images. Experimental results show that our proposed method outperforms state-of-the-art CIR methods on the CIRR and Fashion-IQ datasets.
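The re-ranking step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `vqa_answer` is a hypothetical placeholder standing in for the fine-tuned LVLM (e.g., LLaVA), and the simple exact-match answer comparison and top-C cutoff are assumptions for the sake of the example.

```python
def rerank(ranked_images, qa_pairs, vqa_answer, top_c=10):
    """Re-rank the top-C retrieved images by VQA self-verification.

    ranked_images : candidate images, best-first, as returned by a CIR method
    qa_pairs      : (question, answer) pairs generated from the relative caption
    vqa_answer    : callable (image, question) -> answer string (a VQA model)
    """
    head = ranked_images[:top_c]
    consistent, inconsistent = [], []
    for img in head:
        # An image is kept only if the VQA model's answer agrees with the
        # generated answer for every question derived from the caption.
        ok = all(
            vqa_answer(img, q).strip().lower() == a.strip().lower()
            for q, a in qa_pairs
        )
        (consistent if ok else inconsistent).append(img)
    # Consistent images keep their relative order and move ahead of the
    # inconsistent ones; ranks beyond top-C are left unchanged.
    return consistent + inconsistent + ranked_images[top_c:]
```

In the paper's pipeline the QA pairs themselves come from a fine-tuned LLM; here they are simply passed in as data.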

Published

2025-04-11

How to Cite

Feng, C.-M., Bai, Y., Luo, T., Li, Z., Khan, S., Zuo, W., … Liu, Y. (2025). VQA4CIR: Boosting Composed Image Retrieval with Visual Question Answering. Proceedings of the AAAI Conference on Artificial Intelligence, 39(3), 2942–2950. https://doi.org/10.1609/aaai.v39i3.32301

Section

AAAI Technical Track on Computer Vision II