SlideVQA: A Dataset for Document Visual Question Answering on Multiple Images

Authors

  • Ryota Tanaka, NTT Human Informatics Laboratories
  • Kyosuke Nishida, NTT Human Informatics Laboratories
  • Kosuke Nishida, NTT Human Informatics Laboratories
  • Taku Hasegawa, NTT Human Informatics Laboratories
  • Itsumi Saito, NTT Human Informatics Laboratories
  • Kuniko Saito, NTT Human Informatics Laboratories

DOI:

https://doi.org/10.1609/aaai.v37i11.26598

Keywords:

SNLP: Question Answering, CV: Language and Vision

Abstract

Visual question answering on document images that contain textual, visual, and layout information, called document VQA, has received much attention recently. Although many datasets have been proposed for developing document VQA systems, most of them focus on understanding content relationships within a single image rather than across multiple images. In this study, we propose SlideVQA, a new multi-image document VQA dataset containing 2.6k+ slide decks composed of 52k+ slide images and 14.5k questions about the decks. SlideVQA requires complex reasoning, including single-hop, multi-hop, and numerical reasoning, and it also provides annotated arithmetic expressions for numerical answers to strengthen numerical reasoning. Moreover, we developed a new end-to-end document VQA model that treats evidence selection and question answering as a unified sequence-to-sequence task. Experiments on SlideVQA show that our model outperformed existing state-of-the-art QA models but still fell well short of human performance. We believe that our dataset will facilitate research on document VQA.
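For concreteness, the sketch below illustrates the two ideas the abstract describes: a question record over a multi-image slide deck with an annotated arithmetic expression, and a target sequence that folds evidence selection and answering into one sequence-to-sequence output. All field names, the record class, and the output template are hypothetical, for illustration only; they are not the official SlideVQA schema or the paper's exact serialization.

```python
# Minimal sketch (assumed, not the official SlideVQA schema) of a
# multi-image document VQA example and a unified seq2seq target.

from dataclasses import dataclass

@dataclass
class SlideVQAExample:
    """Hypothetical record for one question over a slide deck."""
    deck_id: str
    slide_image_paths: list[str]        # all slide images in the deck
    question: str
    evidence_slide_indices: list[int]   # slides needed to answer; multi-hop uses several
    answer: str
    arithmetic_expression: str = ""     # e.g. "38 - 26" for a numerical answer

def to_seq2seq_target(ex: SlideVQAExample) -> str:
    """Serialize evidence selection and answering into a single output
    sequence so one encoder-decoder model can be trained end to end.
    The template here is illustrative, not the paper's exact format."""
    evidence = ", ".join(str(i) for i in ex.evidence_slide_indices)
    return f"Evidence: {evidence} Answer: {ex.answer}"

example = SlideVQAExample(
    deck_id="deck_0001",  # hypothetical identifier
    slide_image_paths=[f"deck_0001/slide_{i:02d}.png" for i in range(20)],
    question="How many more employees were there in 2019 than in 2015?",
    evidence_slide_indices=[4, 11],
    answer="12",
    arithmetic_expression="38 - 26",
)
print(to_seq2seq_target(example))  # -> Evidence: 4, 11 Answer: 12
```

Casting both subtasks as one generated sequence lets a single model learn which slides to attend to and what to answer jointly, rather than training a separate retriever and reader.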

Published

2023-06-26

How to Cite

Tanaka, R., Nishida, K., Nishida, K., Hasegawa, T., Saito, I., & Saito, K. (2023). SlideVQA: A Dataset for Document Visual Question Answering on Multiple Images. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13636-13645. https://doi.org/10.1609/aaai.v37i11.26598

Section

AAAI Technical Track on Speech & Natural Language Processing