BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents

Authors

  • Teakgyu Hong, NAVER CLOVA
  • DongHyun Kim, NAVER CLOVA
  • Mingi Ji, KAIST
  • Wonseok Hwang, LBox
  • Daehyun Nam, Upstage AI Research, Upstage AI
  • Sungrae Park, Upstage AI Research, Upstage AI

DOI:

https://doi.org/10.1609/aaai.v36i10.21322

Keywords:

Speech & Natural Language Processing (SNLP)

Abstract

Key information extraction (KIE) from document images requires understanding the contextual and spatial semantics of text in two-dimensional (2D) space. Many recent studies address the task by developing pre-trained language models that combine visual features from document images with text and layout. This paper instead tackles the problem by going back to the basics: an effective combination of text and layout. Specifically, we propose a pre-trained language model, named BROS (BERT Relying On Spatiality), that encodes the relative positions of texts in 2D space and learns from unlabeled documents with an area-masking strategy. With this optimized training scheme for understanding texts in 2D space, BROS achieves comparable or better performance than previous methods on four KIE benchmarks (FUNSD, SROIE*, CORD, and SciTSR) without relying on visual features. This paper also reveals two real-world challenges in KIE tasks, (1) minimizing the error from incorrect text ordering and (2) efficient learning from fewer downstream examples, and demonstrates the superiority of BROS over previous methods on both.
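For illustration, here is a minimal Python sketch of the two ideas the abstract names: pairwise relative 2D positions between token bounding boxes, and an area-masking step that masks a spatially contiguous group of tokens. The function names (relative_positions, area_mask), the choice of top-left corners, and the rectangle-sampling scheme are simplifying assumptions made for this sketch, not the paper's exact formulation.

    import numpy as np

    def relative_positions(boxes):
        """Pairwise relative (dx, dy) offsets between token boxes.

        boxes: (N, 4) array of [x1, y1, x2, y2] normalized to [0, 1].
        Entry [i, j] is the offset of box j's top-left corner relative
        to box i's, the kind of relative 2D signal BROS encodes
        (using top-left corners here is an illustrative assumption).
        """
        tl = boxes[:, :2]                       # top-left corners, (N, 2)
        return tl[None, :, :] - tl[:, None, :]  # broadcast to (N, N, 2)

    def area_mask(boxes, mask_ratio=0.15, rng=None):
        """Mask every token whose box center falls in a random rectangle.

        A random axis-aligned square covering roughly mask_ratio of the
        page is sampled; all tokens inside it are masked together, so
        the model must recover a spatially contiguous region rather than
        isolated tokens. The sampling scheme is a simplifying assumption.
        """
        if rng is None:
            rng = np.random.default_rng()
        side = np.sqrt(mask_ratio)              # square of ~mask_ratio area
        x0 = rng.uniform(0, 1 - side)
        y0 = rng.uniform(0, 1 - side)
        cx = (boxes[:, 0] + boxes[:, 2]) / 2    # box center x
        cy = (boxes[:, 1] + boxes[:, 3]) / 2    # box center y
        return (cx >= x0) & (cx <= x0 + side) & (cy >= y0) & (cy <= y0 + side)

    # Toy usage: three tokens laid out on a normalized page.
    boxes = np.array([[0.10, 0.10, 0.20, 0.15],
                      [0.50, 0.10, 0.60, 0.15],
                      [0.10, 0.80, 0.30, 0.85]])
    print(relative_positions(boxes).shape)                         # (3, 3, 2)
    print(area_mask(boxes, mask_ratio=0.25, rng=np.random.default_rng(0)))

Because the relative offsets are computed between pairs of boxes rather than from absolute token indices, they are unaffected by the serialization order of the OCR output, which is what makes this encoding robust to the incorrect text ordering discussed in the abstract.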

Published

2022-06-28

How to Cite

Hong, T., Kim, D., Ji, M., Hwang, W., Nam, D., & Park, S. (2022). BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 10767-10775. https://doi.org/10.1609/aaai.v36i10.21322

Issue

Vol. 36 No. 10 (2022)

Section

AAAI Technical Track on Speech and Natural Language Processing