Fair Text-to-Image Diffusion via Fair Mapping

Authors

  • Jia Li, Provable Responsible AI and Data Analytics (PRADA) Lab, King Abdullah University of Science and Technology; Institute of Information Engineering, Chinese Academy of Sciences
  • Lijie Hu, Provable Responsible AI and Data Analytics (PRADA) Lab, King Abdullah University of Science and Technology; SDAIA-KAUST
  • Jingfeng Zhang, King Abdullah University of Science and Technology; University of Auckland
  • Tianhang Zheng, The State Key Laboratory of Blockchain and Data Security, Zhejiang University; Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security
  • Hua Zhang, Institute of Information Engineering, Chinese Academy of Sciences
  • Di Wang, Provable Responsible AI and Data Analytics (PRADA) Lab, King Abdullah University of Science and Technology; SDAIA-KAUST

DOI:

https://doi.org/10.1609/aaai.v39i25.34823

Abstract

In this paper, we address the limitations of existing text-to-image diffusion models in generating demographically fair results when given human-related descriptions. These models often struggle to disentangle the target language context from sociocultural biases, resulting in biased image generation. To overcome this challenge, we propose Fair Mapping, a flexible, model-agnostic, and lightweight approach that modifies a pre-trained text-to-image diffusion model by controlling the prompt to achieve fair image generation. A key advantage of our approach is its high efficiency: it only requires updating an additional linear network with a few parameters at a low computational cost. By developing a linear network that maps conditioning embeddings into a debiased space, we enable the generation of relatively balanced demographic results based on the specified text condition. Through comprehensive experiments on face image generation, we show that our method significantly improves the fairness of image generation while preserving nearly the same image quality as conventional diffusion models when prompted with human-related descriptions. By effectively addressing implicit language bias, our method produces fairer and more diverse image outputs.
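The core idea described in the abstract, a lightweight linear network that maps a prompt's conditioning embedding into a debiased space before it conditions the diffusion model, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class name, dimensions, and near-identity initialization are assumptions, and the actual training objective from the paper is omitted.

```python
import numpy as np

class FairMapping:
    """Hypothetical sketch of a linear debiasing map for text-conditioning
    embeddings. The real method trains such a mapping so that generated
    demographics are balanced; here we only show the forward pass."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        # Initialize near the identity so the mapping starts as almost a
        # no-op; training would then nudge embeddings toward a debiased
        # space without destroying the prompt's semantics.
        self.W = np.eye(dim) + 0.01 * rng.standard_normal((dim, dim))
        self.b = np.zeros(dim)

    def __call__(self, embedding):
        # embedding: (seq_len, dim) token-wise conditioning embedding
        # produced by the frozen text encoder of the diffusion model.
        return embedding @ self.W.T + self.b

# Apply the mapping to a dummy prompt embedding; the output keeps the
# same shape, so it can be fed to the diffusion model unchanged.
dim = 8
mapper = FairMapping(dim)
prompt_embedding = np.random.default_rng(1).standard_normal((4, dim))
debiased = mapper(prompt_embedding)
print(debiased.shape)
```

Because only the small `W` and `b` are trainable while the text encoder and diffusion model stay frozen, the approach remains model-agnostic and cheap to update, which matches the efficiency claim in the abstract.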

Published

2025-04-11

How to Cite

Li, J., Hu, L., Zhang, J., Zheng, T., Zhang, H., & Wang, D. (2025). Fair Text-to-Image Diffusion via Fair Mapping. Proceedings of the AAAI Conference on Artificial Intelligence, 39(25), 26256–26264. https://doi.org/10.1609/aaai.v39i25.34823

Section

AAAI Technical Track on Philosophy and Ethics of AI