MultiBooth: Towards Generating All Your Concepts in an Image from Text
DOI:
https://doi.org/10.1609/aaai.v39i10.33187
Abstract
This paper introduces MultiBooth, a method for generating images from text containing multiple user-specified concepts. Despite the significant advancements diffusion models have brought to customized text-to-image generation, existing methods often struggle with multi-concept scenarios due to low concept fidelity and high inference cost. MultiBooth addresses these issues by dividing the multi-concept generation process into two phases: a single-concept learning phase and a multi-concept integration phase. During the single-concept learning phase, we employ a multi-modal image encoder and an efficient concept encoding technique to learn a concise and discriminative representation for each concept. In the multi-concept integration phase, we use bounding boxes to define the generation area for each concept within the cross-attention map. This enables individual concepts to be generated within their specified regions, thereby facilitating the formation of multi-concept images. The strategy not only improves concept fidelity but also reduces additional inference cost. MultiBooth surpasses various baselines in both qualitative and quantitative evaluations, showcasing its superior performance and computational efficiency.
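The bounding-box mechanism described in the abstract amounts to restricting where each concept's text tokens can influence the cross-attention map. The sketch below is a minimal, illustrative PyTorch implementation of that general idea; the function name, tensor shapes, and masking details are assumptions made for illustration and are not the authors' actual implementation.

```python
import torch

def region_masked_cross_attention(q, k, v, token_regions, h, w):
    """Illustrative sketch: bounding-box-constrained cross-attention.

    q: (batch, h*w, d)         image-query features (flattened spatial grid)
    k, v: (batch, n_tokens, d) text-token keys/values
    token_regions: dict mapping token index -> (x0, y0, x1, y1) box in
                   [0, 1] coordinates; tokens without a box attend everywhere.
    """
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5  # (batch, h*w, n_tokens)

    # Build a spatial mask per text token: 1 inside its box, 0 outside.
    mask = torch.ones(h * w, k.shape[1])
    ys, xs = torch.meshgrid(
        torch.linspace(0, 1, h), torch.linspace(0, 1, w), indexing="ij"
    )
    ys, xs = ys.reshape(-1), xs.reshape(-1)
    for t, (x0, y0, x1, y1) in token_regions.items():
        inside = (xs >= x0) & (xs < x1) & (ys >= y0) & (ys < y1)
        mask[:, t] = inside.float()

    # Suppress attention to a concept's tokens outside its bounding box.
    scores = scores.masked_fill(mask.unsqueeze(0) == 0, float("-inf"))
    attn = torch.nan_to_num(scores.softmax(dim=-1))  # guard fully masked rows
    return attn @ v

# Hypothetical usage: two concepts placed in the left and right image halves.
q = torch.randn(1, 16 * 16, 64)
k = torch.randn(1, 8, 64)
v = torch.randn(1, 8, 64)
out = region_masked_cross_attention(
    q, k, v, {2: (0.0, 0.0, 0.5, 1.0), 5: (0.5, 0.0, 1.0, 1.0)}, 16, 16
)
```

Masking the attention scores rather than the output keeps each concept's tokens from contributing to pixels outside their assigned region, which is one simple way to realize per-concept generation areas without extra inference passes.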
Published
2025-04-11
How to Cite
Zhu, C., Li, K., Ma, Y., He, C., & Li, X. (2025). MultiBooth: Towards Generating All Your Concepts in an Image from Text. Proceedings of the AAAI Conference on Artificial Intelligence, 39(10), 10923-10931. https://doi.org/10.1609/aaai.v39i10.33187
Section
AAAI Technical Track on Computer Vision IX