Compositional Inversion for Stable Diffusion Models

Authors

  • Xulu Zhang, Department of Computing, The Hong Kong Polytechnic University, Hong Kong; Center for Artificial Intelligence and Robotics, HKISI, CAS, Hong Kong
  • Xiao-Yong Wei, College of Computer Science, Sichuan University, Chengdu, China; Department of Computing, The Hong Kong Polytechnic University, Hong Kong
  • Jinlin Wu, Center for Artificial Intelligence and Robotics, HKISI, CAS, Hong Kong; State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA, Beijing, China
  • Tianyi Zhang, Department of Computing, The Hong Kong Polytechnic University, Hong Kong
  • Zhaoxiang Zhang, Center for Artificial Intelligence and Robotics, HKISI, CAS, Hong Kong; State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA, Beijing, China; School of Artificial Intelligence, UCAS, Beijing, China
  • Zhen Lei, Center for Artificial Intelligence and Robotics, HKISI, CAS, Hong Kong; State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA, Beijing, China; School of Artificial Intelligence, UCAS, Beijing, China
  • Qing Li, Department of Computing, The Hong Kong Polytechnic University, Hong Kong

DOI:

https://doi.org/10.1609/aaai.v38i7.28565

Keywords:

CV: Computational Photography, Image & Video Synthesis

Abstract

Inversion methods, such as Textual Inversion, generate personalized images by incorporating concepts of interest provided in user images. However, existing methods often suffer from overfitting, where the dominant presence of the inverted concepts leads to the absence of other desired concepts in the output. This stems from the fact that, during inversion, the irrelevant semantics in the user images are also encoded, forcing the inverted concepts to occupy locations far from the core distribution in the embedding space. To address this issue, we propose a method that guides the inversion process towards the core distribution, yielding more compositional embeddings. In addition, we introduce a spatial regularization approach that balances the attention allocated to the concepts being composed. Our method is designed as a post-training approach and can be seamlessly integrated with other inversion methods. Experimental results demonstrate the effectiveness of the proposed approach in mitigating the overfitting problem and in generating more diverse and balanced compositions of concepts in the synthesized images. The source code is available at https://github.com/zhangxulu1996/Compositional-Inversion.
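To make the two ideas in the abstract concrete, below is a minimal PyTorch sketch, not taken from the paper or its repository: a hypothetical anchor_loss pulls an inverted embedding toward its k nearest pretrained token embeddings as a stand-in for guiding inversion toward the core distribution, and a hypothetical attention_balance_loss penalizes imbalance in the total cross-attention mass received by the composed concepts as a stand-in for the spatial regularization. All names, shapes, k, and loss weights here are illustrative assumptions; the paper's actual formulation may differ.

```python
# Illustrative sketch only: toy tensors stand in for a real Stable Diffusion
# pipeline (frozen CLIP token embeddings, cross-attention maps from the UNet).
import torch
import torch.nn.functional as F

def anchor_loss(inverted_emb, vocab_embs, k=5):
    """Pull the inverted embedding toward its k nearest pretrained token
    embeddings (assumed proxy for the 'core distribution')."""
    dists = torch.cdist(inverted_emb.unsqueeze(0), vocab_embs).squeeze(0)  # (V,)
    nearest = vocab_embs[dists.topk(k, largest=False).indices]             # (k, D)
    return F.mse_loss(inverted_emb.expand_as(nearest), nearest)

def attention_balance_loss(attn_maps):
    """Encourage the composed concepts to receive comparable total
    cross-attention mass (assumed proxy for spatial regularization)."""
    mass = attn_maps.flatten(1).sum(dim=1)  # total attention per concept
    return mass.var()                       # low variance = balanced concepts

# Toy usage: D-dim embeddings, V-token vocabulary, two concepts' 16x16 maps.
D, V = 768, 49408
inverted = torch.randn(D, requires_grad=True)        # embedding being learned
vocab = torch.randn(V, D)                            # frozen token embeddings
attn = torch.rand(2, 16, 16, requires_grad=True)     # per-concept attention

loss = anchor_loss(inverted, vocab) + 0.1 * attention_balance_loss(attn)
loss.backward()  # gradients flow to the inverted embedding (and attention)
```

In an actual pipeline these terms would be added to the usual denoising (inversion) objective, so the learned embedding stays compositional while still fitting the user images.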

Published

2024-03-24

How to Cite

Zhang, X., Wei, X.-Y., Wu, J., Zhang, T., Zhang, Z., Lei, Z., & Li, Q. (2024). Compositional Inversion for Stable Diffusion Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(7), 7350-7358. https://doi.org/10.1609/aaai.v38i7.28565

Issue

Vol. 38 No. 7 (2024)

Section

AAAI Technical Track on Computer Vision VI