Retrieval-Augmented Visual Question Answering via Built-in Autoregressive Search Engines
DOI:
https://doi.org/10.1609/aaai.v39i23.34653
Abstract
Retrieval-augmented generation (RAG) has emerged to address the knowledge-intensive visual question answering (VQA) task. Current methods mainly employ separate retrieval and generation modules to acquire external knowledge and generate answers, respectively. We propose ReAuSE, an alternative to previous RAG models for the knowledge-based VQA task, which seamlessly integrates the knowledge retriever into a generative multi-modal large language model, serving as a built-in search engine. Specifically, our model functions both as a generative retriever and as an accurate answer generator. It not only retrieves documents from the knowledge base by producing an identifier for each document, but also answers visual questions based on the retrieved documents. Furthermore, we propose a reinforced retrieval calibration module that uses relevance feedback to improve retrieval performance and align retrieval with the preferences of accurate answer generation. Extensive experiments on two representative datasets, OKVQA and A-OKVQA, demonstrate significant improvements ranging from 2.9% to 9.6% across all evaluation metrics when compared to strong baselines.
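The two roles described in the abstract, generative retrieval by emitting a document identifier and answer generation conditioned on the retrieved document, can be sketched in miniature. All names, the keyword-matching stand-in for autoregressive decoding, and the toy knowledge base below are illustrative assumptions, not the authors' implementation:

```python
# Toy sketch of a "built-in search engine": one model plays both roles,
# first emitting a document identifier (generative retrieval), then
# answering conditioned on the document looked up by that identifier.

# Hypothetical knowledge base mapping identifiers to documents.
KNOWLEDGE_BASE = {
    "doc-sydney": "Sydney's famous opera house opened in 1973.",
    "doc-paris": "The Eiffel Tower in Paris was completed in 1889.",
}

def generate_identifier(question: str) -> str:
    """Stand-in for autoregressive identifier generation: pick the
    document whose words overlap most with the question."""
    q_words = set(question.lower().rstrip("?").split())
    def overlap(doc_id: str) -> int:
        return len(q_words & set(KNOWLEDGE_BASE[doc_id].lower().split()))
    return max(KNOWLEDGE_BASE, key=overlap)

def answer(question: str) -> str:
    doc_id = generate_identifier(question)   # step 1: generative retrieval
    document = KNOWLEDGE_BASE[doc_id]        # step 2: lookup by identifier
    # Step 3: answer generation (stubbed as echoing the evidence).
    return f"Based on {doc_id}: {document}"

print(answer("When did the opera house open"))
```

In the actual model, `generate_identifier` would be constrained decoding over valid identifier sequences by the multi-modal LLM itself, so retrieval and answering share one set of parameters.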
Published
2025-04-11
How to Cite
Long, X., Ma, Z., Hua, E., Zhang, K., Qi, B., & Zhou, B. (2025). Retrieval-Augmented Visual Question Answering via Built-in Autoregressive Search Engines. Proceedings of the AAAI Conference on Artificial Intelligence, 39(23), 24723–24731. https://doi.org/10.1609/aaai.v39i23.34653
Issue
Section
AAAI Technical Track on Natural Language Processing II