AltDiffusion: A Multilingual Text-to-Image Diffusion Model
DOI:
https://doi.org/10.1609/aaai.v38i7.28487
Keywords:
CV: Language and Vision, NLP: Machine Translation, Multilinguality, Cross-Lingual NLP
Abstract
Large Text-to-Image (T2I) diffusion models have shown a remarkable capability to produce photorealistic and diverse images from text inputs. However, existing models support only a limited set of input languages, e.g., English, Chinese, and Japanese, leaving users of other languages underserved and blocking the global expansion of T2I models. This paper therefore presents AltDiffusion, a novel multilingual T2I diffusion model that supports eighteen languages. Specifically, we first train a multilingual text encoder via knowledge distillation. We then plug it into a pretrained English-only diffusion model and train the model with a two-stage schema on a large-scale multilingual dataset to enhance its multilingual capability: a concept-alignment stage followed by a quality-improvement stage. Furthermore, we introduce a new benchmark comprising the Multilingual-General-18 (MG-18) and Multilingual-Cultural-18 (MC-18) datasets to evaluate how well T2I diffusion models generate high-quality images and capture culture-specific concepts across languages. Experimental results on both MG-18 and MC-18 demonstrate that AltDiffusion outperforms current state-of-the-art T2I models, e.g., Stable Diffusion, in multilingual understanding, especially with respect to culture-specific concepts, while retaining a comparable capability for generating high-quality images. All source code and checkpoints are available at https://github.com/superhero-7/AltDiffuson.
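For readers who want to try the model, below is a minimal inference sketch. It assumes a diffusers release that still ships AltDiffusionPipeline (later versions moved it out of the core library) and the publicly released BAAI/AltDiffusion-m18 checkpoint on the Hugging Face Hub; the prompts and sampling settings are illustrative, not from the paper.

```python
# Minimal multilingual inference sketch (assumptions: an older diffusers
# release that includes AltDiffusionPipeline, and the BAAI/AltDiffusion-m18
# checkpoint; a CUDA GPU is assumed for fp16 inference).
import torch
from diffusers import AltDiffusionPipeline

pipe = AltDiffusionPipeline.from_pretrained(
    "BAAI/AltDiffusion-m18", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Prompts in different languages; the distilled multilingual text encoder
# maps them into a shared embedding space, so no translation step is needed.
prompts = [
    "黑暗精灵公主，非常详细，幻想，数字绘画",          # Chinese
    "Una princesa elfa oscura, muy detallada, fantasía",  # Spanish
]
for i, prompt in enumerate(prompts):
    image = pipe(prompt, num_inference_steps=25).images[0]
    image.save(f"altdiffusion_sample_{i}.png")
```

The key design point the sketch reflects is that only the text encoder differs from an English-only Stable Diffusion setup: the same pipeline call works for any of the eighteen supported languages.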
Published
2024-03-24
How to Cite
Ye, F., Liu, G., Wu, X., & Wu, L. (2024). AltDiffusion: A Multilingual Text-to-Image Diffusion Model. Proceedings of the AAAI Conference on Artificial Intelligence, 38(7), 6648-6656. https://doi.org/10.1609/aaai.v38i7.28487
Issue
Vol. 38 No. 7 (2024)
Section
AAAI Technical Track on Computer Vision VI