Improving Automatic VQA Evaluation Using Large Language Models

Authors

  • Oscar Mañas, Mila - Quebec AI Institute, Université de Montréal
  • Benno Krojer, Mila - Quebec AI Institute, McGill University
  • Aishwarya Agrawal, Mila - Quebec AI Institute, Université de Montréal

DOI:

https://doi.org/10.1609/aaai.v38i5.28212

Keywords:

CV: Language and Vision, ML: Evaluation and Analysis, NLP: (Large) Language Models

Abstract

Eight years after the visual question answering (VQA) task was proposed, accuracy remains the primary metric for automatic evaluation. VQA Accuracy has been effective so far in the independent and identically distributed (IID) evaluation setting. However, our community is undergoing a shift towards open-ended generative models and out-of-distribution (OOD) evaluation. In this new paradigm, the existing VQA Accuracy metric is overly stringent and underestimates the performance of VQA systems. Thus, there is a need to develop more robust automatic VQA metrics that serve as a proxy for human judgment. In this work, we propose to leverage the in-context learning capabilities of instruction-tuned large language models (LLMs) to build a better VQA metric. We formulate VQA evaluation as an answer-rating task where the LLM is instructed to score the accuracy of a candidate answer given a set of reference answers. We demonstrate that the proposed metric correlates better with human judgment than existing metrics across several VQA models and benchmarks. We hope that wide adoption of our metric will contribute to better estimates of research progress on the VQA task. We plan to release the evaluation code and collected human judgments.
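As a rough illustration of this answer-rating formulation, the sketch below prompts an instruction-tuned LLM with the question, the reference answers, and a candidate answer, then parses the returned rating into a score. It is not the paper's exact prompt template (which also relies on in-context demonstrations); the helper query_llm is a placeholder for whichever LLM you call, and the 1-to-3 rating scale mapped to [0, 1] is an illustrative assumption.

from typing import Callable, List


def build_rating_prompt(question: str, references: List[str], candidate: str) -> str:
    """Compose an instruction asking the LLM to rate a candidate answer against the references."""
    refs = ", ".join(f'"{r}"' for r in references)
    return (
        "Rate the correctness of a candidate answer to a visual question.\n"
        f"Question: {question}\n"
        f"Reference answers: {refs}\n"
        f'Candidate answer: "{candidate}"\n'
        "Rate the candidate answer on a scale of 1 (incorrect) to 3 (fully correct). "
        "Reply with a single number."
    )


def rate_answer(
    question: str,
    references: List[str],
    candidate: str,
    query_llm: Callable[[str], str],  # placeholder: any instruction-tuned LLM completion function
) -> float:
    """Query the LLM with the rating prompt and map its 1-3 rating to a score in [0, 1]."""
    reply = query_llm(build_rating_prompt(question, references, candidate))
    digits = [ch for ch in reply if ch.isdigit()]
    rating = int(digits[0]) if digits else 1  # fall back to the lowest rating if parsing fails
    rating = min(max(rating, 1), 3)
    return (rating - 1) / 2.0

For example, rate_answer("What color is the bus?", ["blue", "dark blue"], "it is blue", query_llm) would return 1.0 if the LLM replies "3"; the clamping and fallback simply keep the metric well-defined when the model's reply is malformed.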

Published

2024-03-24

How to Cite

Mañas, O., Krojer, B., & Agrawal, A. (2024). Improving Automatic VQA Evaluation Using Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(5), 4171-4179. https://doi.org/10.1609/aaai.v38i5.28212

Issue

Vol. 38 No. 5 (2024)

Section

AAAI Technical Track on Computer Vision IV