Towards Robust Visual Information Extraction in Real World: New Dataset and Novel Solution

Authors

  • Jiapeng Wang South China University of Technology
  • Chongyu Liu South China University of Technology
  • Lianwen Jin South China University of Technology; SCUT-Zhuhai Institute of Modern Industrial Innovation
  • Guozhi Tang South China University of Technology
  • Jiaxin Zhang South China University of Technology
  • Shuaitao Zhang South China University of Technology
  • Qianying Wang Lenovo Research
  • Yaqiang Wu Lenovo Research; Xi’an Jiaotong University
  • Mingxiang Cai Lenovo Research

DOI:

https://doi.org/10.1609/aaai.v35i4.16378

Keywords:

Language and Vision

Abstract

Visual Information Extraction (VIE) has attracted considerable attention recently owing to its advanced applications such as document understanding, automatic marking and intelligent education. Most existing works decouple this problem into several independent sub-tasks of text spotting (text detection and recognition) and information extraction, which completely ignores the high correlation among them during optimization. In this paper, we propose a robust Visual Information Extraction System (VIES) for real-world scenarios: a unified end-to-end trainable framework for simultaneous text detection, recognition and information extraction that takes a single document image as input and outputs the structured information. Specifically, the information extraction branch collects abundant visual and semantic representations from text spotting for multimodal feature fusion and, conversely, provides higher-level semantic clues that contribute to the optimization of text spotting. Moreover, given the shortage of public benchmarks, we construct a fully-annotated dataset called EPHOIE (https://github.com/HCIILAB/EPHOIE), the first Chinese benchmark for both text spotting and visual information extraction. EPHOIE consists of 1,494 images of examination paper heads with complex layouts and backgrounds, including a total of 15,771 Chinese handwritten or printed text instances. Compared with state-of-the-art methods, our VIES shows significantly superior performance on the EPHOIE dataset and achieves a 9.01% F-score gain on the widely used SROIE dataset under the end-to-end scenario.
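To make the fusion idea in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' released code) of how an information extraction branch might combine visual features pooled from the detector with semantic features from the recognizer before classifying each text instance into an entity field. All module names, dimensions, and the concatenate-then-project fusion scheme are illustrative assumptions.

```python
# Hypothetical sketch of multimodal feature fusion for the IE branch;
# module names and dimensions are assumptions, not the paper's code.
import torch
import torch.nn as nn

class FusionIEHead(nn.Module):
    def __init__(self, visual_dim=256, semantic_dim=256,
                 hidden_dim=256, num_fields=10):
        super().__init__()
        # Project concatenated visual + semantic features into a joint space.
        self.fuse = nn.Linear(visual_dim + semantic_dim, hidden_dim)
        # Classify each fused instance feature into an entity field
        # (e.g. "name", "score"), with one slot reserved for "other".
        self.classifier = nn.Linear(hidden_dim, num_fields)

    def forward(self, visual_feats, semantic_feats):
        # visual_feats:   (num_instances, visual_dim), pooled per text box
        # semantic_feats: (num_instances, semantic_dim), from recognition
        fused = torch.relu(
            self.fuse(torch.cat([visual_feats, semantic_feats], dim=-1)))
        return self.classifier(fused)  # (num_instances, num_fields) logits

# Usage: classify 5 detected text instances into entity fields.
head = FusionIEHead()
logits = head(torch.randn(5, 256), torch.randn(5, 256))
print(logits.shape)  # torch.Size([5, 10])
```

In an end-to-end setup such as the one the abstract describes, the field logits would supply the higher-level semantic supervision that flows back into the text spotting branches during joint training.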

Published

2021-05-18

How to Cite

Wang, J., Liu, C., Jin, L., Tang, G., Zhang, J., Zhang, S., Wang, Q., Wu, Y., & Cai, M. (2021). Towards Robust Visual Information Extraction in Real World: New Dataset and Novel Solution. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 2738-2745. https://doi.org/10.1609/aaai.v35i4.16378

Section

AAAI Technical Track on Computer Vision III