DOS: Directional Object Separation in Text Embeddings for Multi-Object Image Generation

Authors

  • Dongnam Byun, Seoul National University
  • Jungwon Park, Seoul National University
  • Jungmin Ko, Seoul National University
  • Changin Choi, Seoul National University
  • Wonjong Rhee, Seoul National University

DOI:

https://doi.org/10.1609/aaai.v40i4.37235

Abstract

Recent progress in text-to-image (T2I) generative models has led to significant improvements in generating high-quality images aligned with text prompts. However, these models still struggle with prompts involving multiple objects, often resulting in object neglect or object mixing. Through extensive studies, we identify four problematic scenarios: Similar Shapes, Similar Textures, Dissimilar Background Biases, and Many Objects, in which inter-object relationships frequently lead to such failures. Motivated by two key observations about CLIP embeddings, we propose DOS (Directional Object Separation), a method that modifies three types of CLIP text embeddings before passing them into text-to-image models. Experimental results show that DOS consistently improves the success rate of multi-object image generation and reduces object mixing. In human evaluations, DOS significantly outperforms four competing methods, receiving 26.24%-43.04% more votes across four benchmarks. These results highlight DOS as a practical and effective solution for improving multi-object image generation.
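The abstract does not detail how DOS modifies the embeddings, but the general idea its name suggests, separating two objects' CLIP text embeddings along the direction between them, can be sketched as follows. This is a hypothetical illustration only, not the paper's actual method; the step size `alpha` and the function name are assumptions.

```python
import numpy as np

def directional_separation(e_a, e_b, alpha=0.1):
    """Push two text embeddings apart along their difference direction.

    Hypothetical sketch of "directional object separation" as a concept;
    the paper's actual procedure operates on three types of CLIP text
    embeddings and is not reproduced here. `alpha` is an assumed step size.
    """
    direction = e_a - e_b
    direction = direction / np.linalg.norm(direction)
    # Move each embedding a small step away from the other.
    return e_a + alpha * direction, e_b - alpha * direction

# Toy stand-ins for CLIP text embeddings of two object tokens.
rng = np.random.default_rng(0)
e_cat = rng.normal(size=768)
e_dog = rng.normal(size=768)

e_cat_sep, e_dog_sep = directional_separation(e_cat, e_dog)
# The separated pair is farther apart than the original pair,
# which is the intuition for reducing object mixing.
```

Because both embeddings move along the same unit direction, the distance between them grows by exactly `2 * alpha`, so the separation is controlled and does not otherwise distort the embeddings.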

Published

2026-03-14

How to Cite

Byun, D., Park, J., Ko, J., Choi, C., & Rhee, W. (2026). DOS: Directional Object Separation in Text Embeddings for Multi-Object Image Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(4), 2490-2497. https://doi.org/10.1609/aaai.v40i4.37235

Section

AAAI Technical Track on Computer Vision I