AGFSync: Leveraging AI-Generated Feedback for Preference Optimization in Text-to-Image Generation

Authors

  • Jingkun An, Beihang University
  • Yinghao Zhu, Beihang University
  • Zongjian Li, Peking University
  • Enshen Zhou, Beihang University
  • Haoran Feng, Tsinghua University
  • Xijie Huang, Beihang University
  • Bohua Chen, Huazhong University of Science and Technology
  • Yemin Shi, Peking University
  • Chengwei Pan, Beihang University; Zhongguancun Laboratory

DOI:

https://doi.org/10.1609/aaai.v39i2.32168

Abstract

Text-to-Image (T2I) diffusion models have achieved remarkable success in image generation. Despite this progress, challenges remain in prompt-following ability, image quality, and the lack of high-quality datasets, which are essential for refining these models. As acquiring labeled data is costly, we introduce AGFSync, a framework that enhances T2I diffusion models through Direct Preference Optimization (DPO) in a fully AI-driven approach. AGFSync utilizes Vision-Language Models (VLMs) to assess image quality across style, coherence, and aesthetics, generating feedback data within an AI-driven loop. By applying AGFSync to leading T2I models such as SD v1.4, SD v1.5, and SDXL-base, our extensive experiments on the TIFA dataset demonstrate notable improvements in VQA scores, aesthetic evaluations, and performance on the HPS v2 benchmark, consistently outperforming the base models. AGFSync's method of refining T2I diffusion models paves the way for scalable alignment techniques.

Published

2025-04-11

How to Cite

An, J., Zhu, Y., Li, Z., Zhou, E., Feng, H., Huang, X., Chen, B., Shi, Y., & Pan, C. (2025). AGFSync: Leveraging AI-Generated Feedback for Preference Optimization in Text-to-Image Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 39(2), 1746-1754. https://doi.org/10.1609/aaai.v39i2.32168

Issue

Vol. 39 No. 2: AAAI-25 Technical Tracks 2

Section

AAAI Technical Track on Computer Vision I