InstructDoc: A Dataset for Zero-Shot Generalization of Visual Document Understanding with Instructions

Authors

  • Ryota Tanaka (NTT Human Informatics Laboratories, NTT Corporation; Tohoku University)
  • Taichi Iki (NTT Human Informatics Laboratories, NTT Corporation)
  • Kyosuke Nishida (NTT Human Informatics Laboratories, NTT Corporation)
  • Kuniko Saito (NTT Human Informatics Laboratories, NTT Corporation)
  • Jun Suzuki (Tohoku University)

DOI:

https://doi.org/10.1609/aaai.v38i17.29874

Keywords:

NLP: Question Answering, CV: Language and Vision

Abstract

We study the problem of completing various visual document understanding (VDU) tasks, e.g., question answering and information extraction, on real-world documents through human-written instructions. To this end, we propose InstructDoc, the first large-scale collection of 30 publicly available VDU datasets, each with diverse instructions in a unified format, covering a wide range of 12 tasks and open document types/formats. Furthermore, to enhance generalization performance on VDU tasks, we design a new instruction-based document reading and understanding model, InstructDr, that connects document images, image encoders, and large language models (LLMs) through a trainable bridging module. Experiments demonstrate that InstructDr can effectively adapt to new VDU datasets, tasks, and domains via the given instructions, and that it outperforms existing multimodal LLMs and ChatGPT without task-specific training.
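To make the architecture described above concrete, the sketch below shows one common way a trainable bridging module can connect a frozen image encoder to a frozen LLM: learned query tokens cross-attend to image features and are projected into the LLM's embedding space as soft prompts. This is a minimal illustration under assumptions; the class name BridgingModule, the dimensions, and the query-token design are hypothetical and not the authors' InstructDr implementation.

```python
import torch
import torch.nn as nn

class BridgingModule(nn.Module):
    """Illustrative trainable connector between a frozen image encoder
    and a frozen LLM (an assumption-based sketch, not InstructDr itself).
    Learned query tokens cross-attend to image features, and the result
    is projected into the LLM's embedding space."""

    def __init__(self, img_dim=1024, llm_dim=2048, n_queries=32):
        super().__init__()
        # Learned query tokens that will summarize the document image.
        self.queries = nn.Parameter(torch.randn(n_queries, llm_dim))
        # Cross-attention from queries to frozen image features.
        self.cross_attn = nn.MultiheadAttention(
            embed_dim=llm_dim, num_heads=8,
            kdim=img_dim, vdim=img_dim, batch_first=True)
        # Final projection into the LLM's input embedding space.
        self.proj = nn.Linear(llm_dim, llm_dim)

    def forward(self, image_feats):
        # image_feats: (batch, n_patches, img_dim) from a frozen encoder.
        b = image_feats.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        out, _ = self.cross_attn(q, image_feats, image_feats)
        # (batch, n_queries, llm_dim): soft prompts for the frozen LLM.
        return self.proj(out)
```

In such a setup, only the bridging module is trained: its output would be prepended to the token embeddings of the instruction before the frozen LLM decodes the answer, which is how instruction conditioning and visual evidence are combined without updating the encoder or the LLM.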

Published

2024-03-24

How to Cite

Tanaka, R., Iki, T., Nishida, K., Saito, K., & Suzuki, J. (2024). InstructDoc: A Dataset for Zero-Shot Generalization of Visual Document Understanding with Instructions. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 19071–19079. https://doi.org/10.1609/aaai.v38i17.29874

Issue

Vol. 38 No. 17 (2024)

Section

AAAI Technical Track on Natural Language Processing II