DPText-DETR: Towards Better Scene Text Detection with Dynamic Points in Transformer

Authors

  • Maoyuan Ye Research Center for Graphic Communication, Printing and Packaging, Institute of Artificial Intelligence, Wuhan University
  • Jing Zhang The University of Sydney
  • Shanshan Zhao JD Explore Academy
  • Juhua Liu Research Center for Graphic Communication, Printing and Packaging, Institute of Artificial Intelligence, Wuhan University
  • Bo Du National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence, School of Computer Science and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University
  • Dacheng Tao JD Explore Academy; The University of Sydney

DOI:

https://doi.org/10.1609/aaai.v37i3.25430

Keywords:

CV: Scene Analysis & Understanding, CV: Applications, CV: Object Detection & Categorization

Abstract

Recently, Transformer-based methods that predict polygon points or Bezier curve control points to localize text have become popular in scene text detection. However, these methods, built upon the detection transformer framework, may achieve sub-optimal training efficiency and performance due to coarse positional query modeling. In addition, the point label form exploited in previous works implies the human reading order, which, as we observe, impedes detection robustness. To address these challenges, this paper proposes a concise Dynamic Point Text DEtection TRansformer network, termed DPText-DETR. Specifically, DPText-DETR directly leverages explicit point coordinates to generate position queries and dynamically updates them in a progressive way. Moreover, to improve the spatial inductive bias of non-local self-attention in the Transformer, we present an Enhanced Factorized Self-Attention module that provides point queries within each instance with circular shape guidance. Furthermore, we design a simple yet effective positional label form to tackle the side effect of the previous form. To further evaluate the impact of different label forms on detection robustness in real-world scenarios, we establish an Inverse-Text test set containing 500 manually labeled images. Extensive experiments demonstrate the high training efficiency, robustness, and state-of-the-art performance of our method on popular benchmarks. The code and the Inverse-Text test set are available at https://github.com/ymy-k/DPText-DETR.
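To make the query mechanism described in the abstract concrete, below is a minimal PyTorch sketch of the general idea: explicit normalized point coordinates are embedded into positional queries (standard sinusoidal encoding, as in DETR variants) and progressively refined layer by layer via predicted offsets. All function and class names here are illustrative assumptions, not the paper's released implementation; consult the official repository for the actual code.

```python
# Sketch only: dynamic point positional queries with progressive refinement.
# Names such as point_to_pos_embed and DynamicPointRefiner are hypothetical.
import math
import torch
import torch.nn as nn


def point_to_pos_embed(points: torch.Tensor, num_feats: int = 128,
                       temperature: float = 10000.0) -> torch.Tensor:
    """Sinusoidal embedding of normalized (x, y) points.

    points: (batch, num_points, 2) with coordinates in [0, 1].
    returns: (batch, num_points, 2 * num_feats)
    """
    dim_t = torch.arange(num_feats, dtype=torch.float32, device=points.device)
    dim_t = temperature ** (2 * (dim_t // 2) / num_feats)
    pos = (points * 2 * math.pi)[..., None] / dim_t        # (B, N, 2, num_feats)
    pos = torch.stack((pos[..., 0::2].sin(),
                       pos[..., 1::2].cos()), dim=-1).flatten(-2)
    return pos.flatten(-2)                                 # concat x/y embeddings


class DynamicPointRefiner(nn.Module):
    """One decoder step: attend with point-conditioned queries, then predict
    per-point offsets that update the explicit coordinates, which in turn
    regenerate the positional queries for the next layer."""

    def __init__(self, d_model: int = 256, num_feats: int = 128):
        super().__init__()
        self.num_feats = num_feats
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.offset_head = nn.Linear(d_model, 2)           # per-point (dx, dy)

    def forward(self, content: torch.Tensor, points: torch.Tensor):
        # content: (B, N, d_model); points: (B, N, 2), normalized.
        pos_query = point_to_pos_embed(points, self.num_feats)
        q = k = content + pos_query
        content = content + self.attn(q, k, content)[0]
        # Progressive update in logit space keeps refined points inside [0, 1].
        logits = torch.logit(points.clamp(1e-4, 1 - 1e-4))
        points = torch.sigmoid(logits + self.offset_head(content))
        return content, points.detach()                    # detach: layer-wise refinement


# Usage: one shared refiner stands in for a stack of decoder layers.
refiner = DynamicPointRefiner()
content = torch.randn(2, 16, 256)    # 16 point queries per text instance
points = torch.rand(2, 16, 2)        # explicit polygon control points
for _ in range(6):
    content, points = refiner(content, points)
```

The paper's Enhanced Factorized Self-Attention additionally constrains attention among the point queries of each instance with circular shape guidance; that component is omitted above for brevity.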

Published

2023-06-26

How to Cite

Ye, M., Zhang, J., Zhao, S., Liu, J., Du, B., & Tao, D. (2023). DPText-DETR: Towards Better Scene Text Detection with Dynamic Points in Transformer. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 3241-3249. https://doi.org/10.1609/aaai.v37i3.25430

Section

AAAI Technical Track on Computer Vision III