A Question-Answering Approach to Key Value Pair Extraction from Form-Like Document Images

Authors

  • Kai Hu, University of Science and Technology of China & Microsoft Research Asia
  • Zhuoyuan Wu, Peking University Shenzhen Graduate School & Microsoft Research Asia
  • Zhuoyao Zhong, Microsoft Research Asia
  • Weihong Lin, Microsoft Research Asia
  • Lei Sun, Microsoft Research Asia
  • Qiang Huo, Microsoft Research Asia

DOI:

https://doi.org/10.1609/aaai.v37i11.26516

Keywords:

SNLP: Information Extraction

Abstract

In this paper, we present a new question-answering (QA) based key-value pair extraction approach, called KVPFormer, to robustly extract key-value relationships between entities from form-like document images. Specifically, KVPFormer first identifies key entities from all entities in an image with a Transformer encoder, then takes these key entities as questions and feeds them into a Transformer decoder to predict their corresponding answers (i.e., value entities) in parallel. To achieve higher answer prediction accuracy, we further propose a coarse-to-fine answer prediction approach, which first extracts multiple answer candidates for each identified question in the coarse stage and then selects the most likely one among these candidates in the fine stage. In this way, the learning difficulty of answer prediction is effectively reduced, so the prediction accuracy can be improved. Moreover, we introduce a spatial compatibility attention bias into the self-attention/cross-attention mechanism of KVPFormer to better model the spatial interactions between entities. With these new techniques, our proposed KVPFormer achieves state-of-the-art results on the FUNSD and XFUND datasets, outperforming the previous best-performing method by 7.2% and 13.2% in F1 score, respectively.
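To make the abstract's two core mechanisms concrete, here is a minimal PyTorch-style sketch, not the authors' released code: the module and function names, the 4-dimensional relative-geometry feature, the MLP sizes, the candidate count k, and the coarse_head/fine_head callables are all illustrative assumptions. The first part shows an additive spatial compatibility bias injected into attention logits; the second shows coarse-to-fine answer selection, which shortlists candidates and then re-ranks them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialBiasAttention(nn.Module):
    """Single-head attention whose logits receive an additive spatial bias.

    The bias is computed from pairwise relative box geometry by a small
    MLP, sketching the idea of letting attention weights reflect the
    spatial compatibility between entities.
    """

    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # (dx, dy, log width ratio, log height ratio) -> scalar logit bias.
        self.bias_mlp = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        # x: (N, d_model) entity embeddings; boxes: (N, 4) as (cx, cy, w, h).
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        centers, sizes = boxes[:, :2], boxes[:, 2:]
        rel = torch.cat(
            [
                centers[:, None, :] - centers[None, :, :],         # (N, N, 2)
                torch.log(sizes[:, None, :] / sizes[None, :, :]),  # (N, N, 2)
            ],
            dim=-1,
        )
        bias = self.bias_mlp(rel).squeeze(-1)  # (N, N) pairwise logit bias
        logits = q @ k.t() * self.scale + bias
        return F.softmax(logits, dim=-1) @ v


def coarse_to_fine_answer(q_emb, entity_embs, coarse_head, fine_head, k=5):
    """Pick the answer entity for one question (key entity) in two stages.

    Coarse stage: score every entity as a candidate answer, keep top-k.
    Fine stage: re-score only those k candidates and return the winner.
    coarse_head and fine_head are hypothetical scoring callables.
    """
    coarse_scores = coarse_head(q_emb, entity_embs)    # (N,)
    k = min(k, entity_embs.size(0))
    top = coarse_scores.topk(k).indices                # (k,)
    fine_scores = fine_head(q_emb, entity_embs[top])   # (k,)
    return top[fine_scores.argmax()]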

Published

2023-06-26

How to Cite

Hu, K., Wu, Z., Zhong, Z., Lin, W., Sun, L., & Huo, Q. (2023). A Question-Answering Approach to Key Value Pair Extraction from Form-Like Document Images. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 12899-12906. https://doi.org/10.1609/aaai.v37i11.26516

Section

AAAI Technical Track on Speech & Natural Language Processing