Extracting Zero-shot Structured Information from Form-like Documents: Pretraining with Keys and Triggers

Authors

  • Rongyu Cao, Key Lab of Intelligent Information Processing of Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Ping Luo, Key Lab of Intelligent Information Processing of Chinese Academy of Sciences; University of Chinese Academy of Sciences; Peng Cheng Laboratory

Keywords

Information Extraction

Abstract

In this paper, we revisit the problem of extracting the values of a given set of key fields from form-like documents. This is a vital step in supporting many downstream applications, such as knowledge base construction, question answering, and document comprehension. Previous studies ignore the semantics of the given keys by treating them only as class labels, and thus may be incapable of handling zero-shot keys. Meanwhile, although these models often leverage the attention mechanism, the learned features may not reflect the true reasons why humans would recognize the value for a given key, and thus may not generalize well to new documents. To address these issues, we propose a Key-Aware and Trigger-Aware (KATA) extraction model. Given an input key, it explicitly learns two mappings, namely from key representations to trigger representations and then from trigger representations to values. These two mappings may be intrinsic and invariant across different keys and documents. With a large training set automatically constructed from Wikipedia data, we pre-train these two mappings. Experiments on two applications with a fine-tuning step show that the proposed model achieves more than 70% accuracy on the extraction of zero-shot keys, while previous methods all fail.
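The two-stage mapping described in the abstract can be sketched as follows. This is a minimal illustrative toy, not the authors' architecture: the embedding dimension, the use of plain linear maps, and dot-product scoring over candidate spans are all assumptions made here for clarity.

```python
import numpy as np

# Toy sketch of KATA's two explicit mappings (illustrative assumptions):
#   1) key representation  -> trigger representation
#   2) trigger representation -> value query, scored against candidate spans
rng = np.random.default_rng(0)
d = 8  # embedding dimension (assumed)

# In the real model these mappings are pre-trained; here they are random.
W_key_to_trigger = rng.normal(size=(d, d))
W_trigger_to_value = rng.normal(size=(d, d))

def extract_value(key_emb, candidate_embs):
    """Return the index of the candidate span best matching the key's trigger."""
    trigger = W_key_to_trigger @ key_emb       # mapping 1: key -> trigger
    query = W_trigger_to_value @ trigger       # mapping 2: trigger -> value query
    scores = candidate_embs @ query            # dot-product scoring (assumed)
    return int(np.argmax(scores))

key_emb = rng.normal(size=d)          # e.g. an embedding of the key "Date of Birth"
candidates = rng.normal(size=(5, d))  # embeddings of 5 candidate value spans
best = extract_value(key_emb, candidates)
print(best)  # index of the highest-scoring candidate span
```

Because the key is consumed as a representation rather than a class label, the same two mappings can in principle be applied to a key never seen during training, which is the zero-shot setting the paper targets.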

Published

2021-05-18

How to Cite

Cao, R., & Luo, P. (2021). Extracting Zero-shot Structured Information from Form-like Documents: Pretraining with Keys and Triggers. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 12612-12620. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17494

Section

AAAI Technical Track on Speech and Natural Language Processing I