SSMG: Spatial-Semantic Map Guided Diffusion Model for Free-Form Layout-to-Image Generation

Authors

  • Chengyou Jia School of Computer Science and Technology, MOEKLINNS Lab, Xi’an Jiaotong University
  • Minnan Luo School of Computer Science and Technology, MOEKLINNS Lab, Xi’an Jiaotong University
  • Zhuohang Dang School of Computer Science and Technology, MOEKLINNS Lab, Xi’an Jiaotong University
  • Guang Dai SGIT AI Lab, State Grid Corporation of China
  • Xiaojun Chang University of Technology Sydney; Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)
  • Mengmeng Wang Zhejiang University; SGIT AI Lab
  • Jingdong Wang Baidu Inc.

DOI:

https://doi.org/10.1609/aaai.v38i3.28024

Keywords:

CV: Computational Photography, Image & Video Synthesis, CV: Applications, CV: Language and Vision, CV: Scene Analysis & Understanding

Abstract

Despite significant progress in Text-to-Image (T2I) generative models, even lengthy and complex text descriptions struggle to convey fine-grained control. In contrast, Layout-to-Image (L2I) generation, which aims to generate realistic and complex scene images from user-specified layouts, has risen to prominence. However, existing methods transform layout information into tokens or RGB images for conditional control in the generative process, leading to insufficient spatial and semantic controllability over individual instances. To address these limitations, we propose a novel Spatial-Semantic Map Guided (SSMG) diffusion model that adopts a feature map, derived from the layout, as guidance. Owing to the rich spatial and semantic information encapsulated in well-designed feature maps, SSMG achieves superior generation quality with sufficient spatial and semantic controllability compared to previous works. Additionally, we propose the Relation-Sensitive Attention (RSA) and Location-Sensitive Attention (LSA) mechanisms. The former models the relationships among multiple objects within a scene, while the latter heightens the model's sensitivity to the spatial information embedded in the guidance. Extensive experiments demonstrate that SSMG achieves highly promising results, setting a new state-of-the-art across a range of metrics encompassing fidelity, diversity, and controllability.
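To make the core idea of layout-derived guidance concrete, below is a minimal sketch of how a spatial-semantic map could be rasterized from a user-specified layout. This is an illustrative assumption, not the paper's implementation: boxes are normalized `(x0, y0, x1, y1)` coordinates, each instance carries a class index, and per-class embedding vectors stand in for the semantic features; the function name `build_spatial_semantic_map` is hypothetical.

```python
import numpy as np

def build_spatial_semantic_map(boxes, labels, embeddings, size=(64, 64)):
    """Rasterize a layout into a per-pixel semantic feature map.

    boxes:      list of (x0, y0, x1, y1) in [0, 1] normalized coordinates.
    labels:     list of class indices, one per box.
    embeddings: (num_classes, dim) array of class embedding vectors.
    Returns an (H, W, dim) map; later boxes overwrite earlier ones
    where they overlap, giving each pixel a single semantic vector.
    """
    h, w = size
    dim = embeddings.shape[1]
    fmap = np.zeros((h, w, dim), dtype=np.float32)
    for (x0, y0, x1, y1), cls in zip(boxes, labels):
        # Convert normalized box to pixel indices, keeping at least 1 px.
        r0, r1 = int(y0 * h), max(int(y0 * h) + 1, int(y1 * h))
        c0, c1 = int(x0 * w), max(int(x0 * w) + 1, int(x1 * w))
        fmap[r0:r1, c0:c1] = embeddings[cls]
    return fmap

# Example: one "class-1" instance covering the top-left quadrant.
emb = np.eye(3, dtype=np.float32)          # toy one-hot class embeddings
m = build_spatial_semantic_map([(0.0, 0.0, 0.5, 0.5)], [1], emb)
```

Such a map carries both where each instance sits (the rasterized box) and what it is (the embedding), which is the kind of dense spatial-semantic signal the abstract contrasts with token- or RGB-based layout conditioning.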

Published

2024-03-24

How to Cite

Jia, C., Luo, M., Dang, Z., Dai, G., Chang, X., Wang, M., & Wang, J. (2024). SSMG: Spatial-Semantic Map Guided Diffusion Model for Free-Form Layout-to-Image Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 2480-2488. https://doi.org/10.1609/aaai.v38i3.28024

Section

AAAI Technical Track on Computer Vision II