Multi-Architecture Multi-Expert Diffusion Models

Authors

  • Yunsung Lee, Wrtn Technologies
  • JinYoung Kim, Twelvelabs
  • Hyojun Go, Twelvelabs
  • Myeongho Jeong, Yanolja
  • Shinhyeok Oh, Riiid AI Research
  • Seungtaek Choi, Yanolja

DOI:

https://doi.org/10.1609/aaai.v38i12.29245

Keywords:

ML: Deep Generative Models & Autoencoders, CV: Computational Photography, Image & Video Synthesis

Abstract

In this paper, we address the performance degradation of efficient diffusion models by introducing Multi-architecturE Multi-Expert diffusion models (MEME). We identify the need for tailored operations at different time-steps of the diffusion process and leverage this insight to create compact yet high-performing models. MEME assigns distinct architectures to different time-step intervals, balancing convolution and self-attention operations according to the frequency characteristics observed at each interval. We also introduce a soft interval assignment strategy so that every expert is trained comprehensively. Empirically, MEME operates 3.3 times faster than baselines while improving image generation quality (FID scores) by 0.62 on FFHQ and 0.37 on CelebA. Beyond validating the effectiveness of assigning a more suitable architecture to each time-step, where our efficient models outperform larger ones, we argue that MEME opens a new design choice for diffusion models that can easily be applied in other scenarios, such as large multi-expert models.
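
To make the routing idea concrete, here is a minimal PyTorch sketch of interval-based expert dispatch as described in the abstract. It is a sketch under assumptions, not the authors' implementation: the MultiExpertDenoiser class, the interval boundaries, and the Conv2d stand-in experts are all hypothetical, and a real denoiser would also condition on the time-step.

```python
import torch
import torch.nn as nn


class MultiExpertDenoiser(nn.Module):
    """Hypothetical sketch of interval-based expert routing (not the paper's code).

    Each expert covers one contiguous interval of diffusion time-steps. In the
    spirit of MEME, experts for large t (noisy inputs dominated by low-frequency
    content) would lean on self-attention, while experts for small t
    (high-frequency detail) would lean on convolution; here the experts are
    arbitrary stand-in modules.
    """

    def __init__(self, experts: list[nn.Module], boundaries: list[int]):
        # boundaries[i] is the exclusive upper time-step handled by experts[i];
        # e.g. boundaries = [250, 500, 750, 1000] splits T = 1000 into four intervals.
        super().__init__()
        assert len(experts) == len(boundaries), "one boundary per expert"
        self.experts = nn.ModuleList(experts)
        self.boundaries = boundaries

    def forward(self, x: torch.Tensor, t: int) -> torch.Tensor:
        # Dispatch the batch to the single expert whose interval contains t;
        # only that expert runs, so each step costs one compact model, not all of them.
        # (A real denoiser would also receive t as conditioning; omitted for brevity.)
        for expert, upper in zip(self.experts, self.boundaries):
            if t < upper:
                return expert(x)
        return self.experts[-1](x)


# Toy usage: four stand-in experts over T = 1000 sampling steps.
experts = [nn.Conv2d(3, 3, kernel_size=3, padding=1) for _ in range(4)]
model = MultiExpertDenoiser(experts, boundaries=[250, 500, 750, 1000])
x = torch.randn(1, 3, 32, 32)
out = model(x, t=600)  # t = 600 falls in [500, 750), so the third expert runs
```

Only the hard routing used at sampling time is shown; the soft interval assignment the abstract mentions presumably softens these boundaries during training so each expert also sees nearby time-steps.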

Published

2024-03-24

How to Cite

Lee, Y., Kim, J., Go, H., Jeong, M., Oh, S., & Choi, S. (2024). Multi-Architecture Multi-Expert Diffusion Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 13427-13436. https://doi.org/10.1609/aaai.v38i12.29245

Issue

Vol. 38 No. 12 (2024)

Section

AAAI Technical Track on Machine Learning III