Generative Models for Art and Society

Authors

  • Yankun Wu, Osaka University

DOI:

https://doi.org/10.1609/aies.v7i2.31911

Abstract

Text-to-image models have demonstrated remarkable capabilities in producing high-fidelity images from natural language prompts. The widespread application and increasing accessibility of pioneering models such as Stable Diffusion have drawn significant attention to the impact of generated images on representations in downstream tasks. Concurrently, ethical concerns about text-to-image generation have emerged, especially regarding gender bias. This paper presents three projects that explore the capabilities and biases of generative models. The first project leverages Stable Diffusion to disentangle content and style in art paintings, paving the way for applying generative models to the digital humanities. The second project evaluates gender bias in text-to-image generation, analyzing its origins and manifestations in generated images. The third project presents a survey on societal bias evaluation in generative models, aiming to synthesize current research and provide insights into future directions. Through these projects, we aim to contribute to the growing body of knowledge on the applications and potential societal impacts of text-to-image models, fostering a more nuanced understanding of their capabilities and limitations.

Published

2025-01-22

How to Cite

Wu, Y. (2025). Generative Models for Art and Society. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(2), 58-60. https://doi.org/10.1609/aies.v7i2.31911