Directed Diffusion: Direct Control of Object Placement through Attention Guidance

Authors

  • Wan-Duo Kurt Ma (Victoria University of Wellington)
  • Avisek Lahiri (Google Research)
  • J. P. Lewis (NVIDIA Research)
  • Thomas Leung (Google Research)
  • W. Bastiaan Kleijn (Victoria University of Wellington; Google Research)

DOI:

https://doi.org/10.1609/aaai.v38i5.28204

Keywords:

CV: Computational Photography, Image & Video Synthesis, CV: Applications, CV: Large Vision Models

Abstract

Text-guided diffusion models such as DALLE-2, Imagen, and Stable Diffusion are able to generate an effectively endless variety of images given only a short text prompt describing the desired image content. In many cases the images are of very high quality. However, these models often struggle to compose scenes containing several key objects such as characters in specified positional relationships. The missing capability to "direct" the placement of characters and objects both within and across images is crucial in storytelling, as recognized in the literature on film and animation theory. In this work, we take a particularly straightforward approach to providing the needed direction. Drawing on the observation that the cross-attention maps for prompt words reflect the spatial layout of objects denoted by those words, we introduce an optimization objective that produces "activation" at desired positions in these cross-attention maps. The resulting approach is a step toward generalizing the applicability of text-guided diffusion models beyond single images to collections of related images, as in storybooks. Directed Diffusion provides easy high-level positional control over multiple objects, while making use of an existing pre-trained model and maintaining a coherent blend between the positioned objects and the background. Moreover, it requires only a few lines to implement.
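
As a rough illustration of the idea described in the abstract (not the authors' released code), the following PyTorch sketch assumes a softmaxed cross-attention tensor of shape (heads, pixels, tokens) and computes a loss that penalizes attention mass for a chosen prompt token falling outside a user-specified bounding box; in the full method such a signal would steer the latent during the early denoising steps. The helper names region_mask and placement_loss, and the box convention, are hypothetical.

    import torch

    def region_mask(height, width, box, device="cpu"):
        # Hypothetical helper: box = (top, bottom, left, right) as fractions of the image.
        mask = torch.zeros(height, width, device=device)
        t, b, l, r = (int(round(v * s)) for v, s in zip(box, (height, height, width, width)))
        mask[t:b, l:r] = 1.0
        return mask.flatten()  # shape: (height * width,)

    def placement_loss(attn_maps, token_idx, mask):
        # attn_maps: (heads, height*width, num_tokens) softmaxed cross-attention.
        # Returns 1 minus the fraction of the token's attention mass inside the region,
        # so minimizing it concentrates the token's "activation" in the desired box.
        token_attn = attn_maps[..., token_idx]        # (heads, height*width)
        inside = (token_attn * mask).sum(dim=-1)      # attention mass inside the box
        total = token_attn.sum(dim=-1) + 1e-8
        return (1.0 - inside / total).mean()

    # Toy usage with random maps standing in for a diffusion model's cross-attention.
    heads, h, w, tokens = 8, 16, 16, 10
    attn = torch.rand(heads, h * w, tokens).softmax(dim=-1).requires_grad_(True)
    mask = region_mask(h, w, box=(0.0, 0.5, 0.5, 1.0))  # upper-right quadrant
    loss = placement_loss(attn, token_idx=3, mask=mask)
    loss.backward()  # in practice the gradient would flow back to the latent being denoised
    print(float(loss))

The gradient of this objective, taken with respect to the quantities the model is free to change during sampling, is what provides the "direction": it rewards configurations in which the named object's cross-attention lands where the user asked for it.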

Published

2024-03-24

How to Cite

Ma, W.-D. K., Lahiri, A., Lewis, J. P., Leung, T., & Kleijn, W. B. (2024). Directed Diffusion: Direct Control of Object Placement through Attention Guidance. Proceedings of the AAAI Conference on Artificial Intelligence, 38(5), 4098-4106. https://doi.org/10.1609/aaai.v38i5.28204

Issue

Vol. 38 No. 5 (2024)

Section

AAAI Technical Track on Computer Vision IV