A User-Friendly Framework for Generating Model-Preferred Prompts in Text-to-Image Synthesis

Authors

  • Nailei Hei, Fudan University
  • Qianyu Guo, Fudan University
  • Zihao Wang, Tongji University
  • Yan Wang, Fudan University
  • Haofen Wang, Tongji University
  • Wenqiang Zhang, Fudan University

DOI:

https://doi.org/10.1609/aaai.v38i3.27986

Keywords:

CV: Language and Vision, CV: Computational Photography, Image & Video Synthesis

Abstract

Well-designed prompts have demonstrated the potential to guide text-to-image models in generating amazing images. Although existing prompt engineering methods can provide high-level guidance, it is challenging for novice users to achieve the desired results by manually entering prompts, owing to a discrepancy between novice-user-input prompts and model-preferred prompts. To bridge the distribution gap between user input behavior and model training datasets, we first construct a Coarse-Fine Granularity Prompts dataset (CFP) and then propose a User-Friendly Fine-Grained Text Generation framework (UF-FGTG) for automated prompt optimization. CFP is a new text-to-image dataset that pairs coarse- and fine-grained prompts to facilitate the development of automated prompt generation methods. UF-FGTG automatically translates user-input prompts into model-preferred prompts. Specifically, a prompt refiner continually rewrites prompts, empowering users to select results that align with their unique needs; image-related loss functions from the text-to-image model are integrated into the training of the text generator so that it produces model-preferred prompts; and an adaptive feature extraction module ensures diversity in the generated results. Experiments demonstrate that our approach generates more visually appealing and diverse images than previous state-of-the-art methods, achieving an average improvement of 5% across six quality and aesthetic metrics. Data and code are available at https://github.com/Naylenv/UF-FGTG.
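The "prompt refiner" loop described above can be illustrated with a minimal sketch. Note this is a hypothetical toy, not the authors' UF-FGTG model: the modifier vocabulary and the `refine_prompt` function are invented for illustration, whereas the real system learns rewrites with a trained text generator guided by image-related losses.

```python
# Hypothetical illustration of iterative coarse-to-fine prompt refinement.
# The modifier list and selection logic are placeholders; UF-FGTG learns
# these rewrites rather than appending from a fixed vocabulary.

MODIFIERS = [
    "highly detailed",
    "cinematic lighting",
    "8k resolution",
    "vivid colors",
]

def refine_prompt(coarse_prompt: str, steps: int = 3) -> list[str]:
    """Rewrite a coarse user prompt into progressively finer-grained
    candidates, returning every intermediate version so the user can
    pick the one that matches their intent."""
    candidates = [coarse_prompt]
    prompt = coarse_prompt
    for modifier in MODIFIERS[:steps]:
        prompt = f"{prompt}, {modifier}"
        candidates.append(prompt)
    return candidates

if __name__ == "__main__":
    for candidate in refine_prompt("a cat on a sofa"):
        print(candidate)
```

Presenting all intermediate candidates, rather than only the final rewrite, mirrors the user-in-the-loop selection the abstract describes.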

Published

2024-03-24

How to Cite

Hei, N., Guo, Q., Wang, Z., Wang, Y., Wang, H., & Zhang, W. (2024). A User-Friendly Framework for Generating Model-Preferred Prompts in Text-to-Image Synthesis. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 2139-2147. https://doi.org/10.1609/aaai.v38i3.27986

Section

AAAI Technical Track on Computer Vision II