VisualMRC: Machine Reading Comprehension on Document Images

Authors

  • Ryota Tanaka NTT Media Intelligence Laboratories, NTT Corporation
  • Kyosuke Nishida NTT Media Intelligence Laboratories, NTT Corporation
  • Sen Yoshida NTT Media Intelligence Laboratories, NTT Corporation

DOI:

https://doi.org/10.1609/aaai.v35i15.17635

Keywords:

Language Grounding & Multi-modal NLP, Question Answering, Generation, Language Models

Abstract

Recent studies on machine reading comprehension have focused on text-level understanding but have not yet reached the level of human understanding of the visual layout and content of real-world documents. In this study, we introduce a new visual machine reading comprehension dataset, named VisualMRC, wherein given a question and a document image, a machine reads and comprehends texts in the image to answer the question in natural language. Compared with existing visual question answering datasets that contain texts in images, VisualMRC focuses more on developing natural language understanding and generation abilities. It contains 30,000+ pairs of a question and an abstractive answer for 10,000+ document images sourced from multiple domains of webpages. We also introduce a new model that extends existing sequence-to-sequence models, pre-trained with large-scale text corpora, to take into account the visual layout and content of documents. Experiments with VisualMRC show that this model outperformed the base sequence-to-sequence models and a state-of-the-art VQA model. However, its performance is still below that of humans on most automatic evaluation metrics. The dataset will facilitate research aimed at connecting vision and language understanding.

Published

2021-05-18

How to Cite

Tanaka, R., Nishida, K., & Yoshida, S. (2021). VisualMRC: Machine Reading Comprehension on Document Images. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13878-13888. https://doi.org/10.1609/aaai.v35i15.17635

Section

AAAI Technical Track on Speech and Natural Language Processing II