Brush Your Text: Synthesize Any Scene Text on Images via Diffusion Model

Authors

  • Lingjun Zhang, East China Normal University; Shanghai AI Laboratory
  • Xinyuan Chen, Shanghai AI Laboratory
  • Yaohui Wang, Shanghai AI Laboratory
  • Yue Lu, East China Normal University
  • Yu Qiao, Shanghai AI Laboratory

DOI:

https://doi.org/10.1609/aaai.v38i7.28550

Keywords:

CV: Computational Photography, Image & Video Synthesis, CV: Applications, CV: Language and Vision

Abstract

Diffusion-based image generation methods have recently been credited with remarkable text-to-image generation capabilities, yet they still struggle to accurately generate multilingual scene text images. To tackle this problem, we propose Diff-Text, a training-free scene text generation framework for any language. Given text in any language along with a textual description of a scene, our model outputs a photo-realistic image. The model leverages rendered sketch images as priors, thereby eliciting the latent multilingual generation ability of the pre-trained Stable Diffusion. Based on our observation of how the cross-attention map influences object placement in generated images, we propose a localized attention constraint in the cross-attention layer to address the unreasonable positioning of scene text. Additionally, we introduce contrastive image-level prompts to further refine the position of the textual region and achieve more accurate scene text generation. Experiments demonstrate that our method outperforms existing methods in both the accuracy of text recognition and the naturalness of foreground-background blending.
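The abstract does not spell out how the localized attention constraint is implemented. As a purely illustrative sketch (the function name, tensor shapes, and `strength` parameter are assumptions, not the paper's actual method), one way to bias cross-attention so that the scene-text tokens attend mainly inside a designated image region is to add a signed bias to their attention logits before the softmax:

```python
import torch

def localized_attention_constraint(attn_scores, text_token_ids, region_mask,
                                   strength=5.0):
    """Illustrative sketch of a localized cross-attention constraint.

    attn_scores:    (batch, heads, hw, n_tokens) raw cross-attention logits,
                    where hw is the flattened spatial resolution.
    text_token_ids: indices of prompt tokens corresponding to the scene text.
    region_mask:    (hw,) boolean tensor, True inside the desired text region.
    strength:      assumed hyperparameter controlling how strongly attention
                    is pushed into the region.

    Boosts the logits of text tokens at spatial positions inside the region
    and suppresses them outside, then renormalizes with a softmax over tokens.
    """
    scores = attn_scores.clone()
    # Signed bias: +strength inside the region, -strength outside.
    inside = region_mask.view(1, 1, -1, 1).float()
    bias = strength * (2.0 * inside - 1.0)
    scores[..., text_token_ids] = scores[..., text_token_ids] + bias
    return scores.softmax(dim=-1)
```

In a Stable Diffusion pipeline, a hook of this shape would be applied inside each cross-attention layer during denoising; the actual constraint used by Diff-Text may differ in form and placement.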

Published

2024-03-24

How to Cite

Zhang, L., Chen, X., Wang, Y., Lu, Y., & Qiao, Y. (2024). Brush Your Text: Synthesize Any Scene Text on Images via Diffusion Model. Proceedings of the AAAI Conference on Artificial Intelligence, 38(7), 7215-7223. https://doi.org/10.1609/aaai.v38i7.28550

Section

AAAI Technical Track on Computer Vision VI