Exploring Stroke-Level Modifications for Scene Text Editing

Authors

  • Yadong Qu University of Science and Technology of China
  • Qingfeng Tan Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou, China
  • Hongtao Xie University of Science and Technology of China
  • Jianjun Xu University of Science and Technology of China
  • Yuxin Wang University of Science and Technology of China
  • Yongdong Zhang University of Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v37i2.25305

Keywords:

CV: Computational Photography, Image & Video Synthesis, CV: Learning & Optimization for CV, CV: Multi-modal Vision, ML: Deep Neural Architectures

Abstract

Scene text editing (STE) aims to replace the text in an image with a desired one while preserving the background and the style of the original text. However, due to complicated background textures and diverse text styles, existing methods fall short of generating clear and legible edited text images. In this study, we attribute the poor editing performance to two problems: 1) Implicit decoupling structure. Previous methods that edit the whole image must learn the different translation rules of background and text regions simultaneously. 2) Domain gap. Due to the lack of edited real scene text images, the network can only be well trained on synthetic pairs and performs poorly on real-world images. To handle the above problems, we propose a novel network that MOdifies Scene Text images at the strokE Level (MOSTEL). First, we generate stroke guidance maps that explicitly indicate the regions to be edited. Unlike the implicit approach of directly modifying all pixels at the image level, such explicit instructions filter out distractions from the background and guide the network to focus on the editing rules of text regions. Second, we propose a Semi-supervised Hybrid Learning scheme to train the network with both labeled synthetic images and unpaired real scene text images, so that the STE model adapts to the distributions of real-world data. Moreover, two new datasets (Tamper-Syn2k and Tamper-Scene) are proposed to fill the gap in public evaluation datasets. Extensive experiments demonstrate that MOSTEL outperforms previous methods both qualitatively and quantitatively. Datasets and code will be available at https://github.com/qqqyd/MOSTEL.
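To make the stroke-level idea concrete, the PyTorch sketch below illustrates how a predicted stroke guidance map could gate the composition of generated text over a reconstructed background. This is a minimal illustration, not the authors' released implementation: the modules `background_net`, `text_gen`, and `mask_net` are hypothetical stand-ins for the components described in the abstract.

```python
# Sketch (assumed structure, not the official MOSTEL code): a stroke
# guidance map restricts editing to text strokes, so the background is
# passed through untouched instead of being re-learned per pixel.
import torch
import torch.nn as nn


class StrokeLevelEditor(nn.Module):
    def __init__(self, background_net: nn.Module, text_gen: nn.Module,
                 mask_net: nn.Module):
        super().__init__()
        self.background_net = background_net  # inpaints the original text region
        self.text_gen = text_gen              # renders target text in the source style
        self.mask_net = mask_net              # predicts a stroke guidance map

    def forward(self, src_img: torch.Tensor, target_text_feat: torch.Tensor):
        background = self.background_net(src_img)
        rendered = self.text_gen(src_img, target_text_feat)
        # Stroke guidance map in [0, 1]: 1 on stroke pixels, 0 elsewhere.
        stroke_mask = torch.sigmoid(self.mask_net(rendered))
        # Explicit stroke-level composition: only masked pixels are edited;
        # the rest is copied from the reconstructed background.
        edited = stroke_mask * rendered + (1.0 - stroke_mask) * background
        return edited, stroke_mask
```

The explicit mask in the final composition is what makes the decoupling explicit rather than implicit: the generator is only responsible for pixels the mask selects, which matches the paper's argument that whole-image editing forces one network to learn background and text translation rules at once.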

Published

2023-06-26

How to Cite

Qu, Y., Tan, Q., Xie, H., Xu, J., Wang, Y., & Zhang, Y. (2023). Exploring Stroke-Level Modifications for Scene Text Editing. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 2119-2127. https://doi.org/10.1609/aaai.v37i2.25305

Section

AAAI Technical Track on Computer Vision II