Visual Semantics Allow for Textual Reasoning Better in Scene Text Recognition

Authors

  • Yue He, National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence, School of Computer Science, and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University
  • Chen Chen, School of Computer Science, Faculty of Engineering, The University of Sydney
  • Jing Zhang, School of Computer Science, Faculty of Engineering, The University of Sydney
  • Juhua Liu, School of Printing and Packaging, and Institute of Artificial Intelligence, Wuhan University
  • Fengxiang He, JD Explore Academy
  • Chaoyue Wang, School of Computer Science, Faculty of Engineering, The University of Sydney
  • Bo Du, National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence, School of Computer Science, and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University

DOI:

https://doi.org/10.1609/aaai.v36i1.19971

Keywords:

Computer Vision (CV)

Abstract

Existing Scene Text Recognition (STR) methods typically use a language model to optimize the joint probability of the 1D character sequence predicted by a visual recognition (VR) model. This ignores the 2D spatial context of visual semantics within and between character instances, so these methods do not generalize well to arbitrarily shaped scene text. To address this issue, in this paper we make the first attempt to perform textual reasoning based on visual semantics. Technically, given the character segmentation maps predicted by a VR model, we construct a subgraph for each character instance, where nodes represent its pixels and edges are added between nodes according to their spatial similarity. These subgraphs are then sequentially connected by their root nodes and merged into a single graph. Based on this graph, we devise a graph convolutional network for textual reasoning (GTR), which is supervised with a cross-entropy loss. GTR can be easily plugged into representative STR models to improve their performance owing to better textual reasoning. Specifically, we construct our model, namely S-GTR, by placing GTR in parallel with the language model in a segmentation-based STR baseline, which effectively exploits the visual-linguistic complementarity via mutual learning. S-GTR sets a new state of the art on six challenging STR benchmarks and generalizes well to multilingual datasets. Code is available at https://github.com/adeline-cs/GTR.
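
For readers who want a concrete picture of the graph construction described in the abstract, the following is a minimal PyTorch sketch. It is not the authors' implementation (see the linked repository for that): the pixel-coordinate node features, the k-nearest-neighbour edge rule, the choice of each subgraph's first pixel as its root, and the plain symmetrically normalized graph-convolution layer, as well as the names build_instance_graph and SimpleGCNLayer, are assumptions made here purely for illustration.

    # Minimal sketch of the subgraph construction and graph-convolution idea
    # from the abstract. Illustrative assumptions only, not the S-GTR code.
    import torch

    def build_instance_graph(masks, k=8):
        """masks: list of (H, W) boolean tensors, one per character instance.
        Returns node features (pixel coordinates) and a joint adjacency matrix:
        each instance forms a subgraph (k-nearest spatial neighbours), and
        consecutive subgraphs are linked through their root (first) nodes."""
        feats, offsets = [], [0]
        for m in masks:
            ys, xs = torch.nonzero(m, as_tuple=True)
            feats.append(torch.stack([ys, xs], dim=1).float())
            offsets.append(offsets[-1] + ys.numel())
        x = torch.cat(feats, dim=0)                    # (N, 2) pixel coordinates
        adj = torch.zeros(x.size(0), x.size(0))
        for i, f in enumerate(feats):                  # intra-instance edges
            knn = torch.cdist(f, f).topk(min(k + 1, f.size(0)), largest=False).indices
            for r, cols in enumerate(knn):
                adj[offsets[i] + r, offsets[i] + cols] = 1.0
        for i in range(len(feats) - 1):                # chain subgraphs via root nodes
            a, b = offsets[i], offsets[i + 1]
            adj[a, b] = adj[b, a] = 1.0
        return x, adj

    class SimpleGCNLayer(torch.nn.Module):
        """One graph convolution: relu(D^-1/2 (A + I) D^-1/2 X W)."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.lin = torch.nn.Linear(in_dim, out_dim)

        def forward(self, x, adj):
            a = adj + torch.eye(adj.size(0))
            d = a.sum(1).clamp(min=1).rsqrt()
            return torch.relu(self.lin((d[:, None] * a * d[None, :]) @ x))

    # Toy usage: two fake character masks cut from a 32x32 segmentation map.
    masks = [torch.zeros(32, 32, dtype=torch.bool) for _ in range(2)]
    masks[0][5:10, 4:8] = True
    masks[1][5:10, 14:18] = True
    x, adj = build_instance_graph(masks)
    node_feats = SimpleGCNLayer(2, 16)(x, adj)         # per-node reasoning features

In the full S-GTR model such node features would support character-level textual reasoning in parallel with the language model; here they merely illustrate how 2D spatial context within and between character instances can be aggregated.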

Published

2022-06-28

How to Cite

He, Y., Chen, C., Zhang, J., Liu, J., He, F., Wang, C., & Du, B. (2022). Visual Semantics Allow for Textual Reasoning Better in Scene Text Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 888-896. https://doi.org/10.1609/aaai.v36i1.19971

Issue

Vol. 36 No. 1 (2022)

Section

AAAI Technical Track on Computer Vision I