ShaLa: Multimodal Shared Latent Generative Modelling

Authors

  • Jiali Cui, Stevens Institute of Technology
  • Yan-Ying Chen, Toyota Research Institute
  • Yanxia Zhang, Toyota Research Institute
  • Matthew Klenk, Toyota Research Institute

DOI:

https://doi.org/10.1609/aaai.v40i25.39203

Abstract

This paper presents a novel generative framework for learning shared latent representations across multimodal data. Many advanced multimodal methods focus on capturing all combinations of modality-specific details across inputs, which can inadvertently obscure the high-level semantic concepts shared across modalities. Notably, multimodal VAEs with low-dimensional latent variables are designed to capture these shared representations, enabling tasks such as joint multimodal synthesis and cross-modal inference. However, multimodal VAEs often struggle to model an expressive joint variational posterior and suffer from low-quality synthesis. In this work, ShaLa addresses these challenges by integrating a novel architectural inference model with a second-stage expressive diffusion prior, which not only facilitates effective inference of the shared latent representation but also markedly improves the quality of downstream multimodal synthesis. We validate ShaLa extensively across multiple benchmarks, demonstrating superior coherence and synthesis quality compared to state-of-the-art multimodal VAEs. Furthermore, ShaLa scales to many more modalities, where prior multimodal VAEs fall short of capturing the increasing complexity of the shared latent space.
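The abstract describes a two-stage design: a multimodal VAE whose inference model aggregates per-modality evidence into a low-dimensional shared latent, followed by an expressive diffusion prior fitted over that latent. The paper's implementation details are not given here, so the sketch below is only a minimal illustration of that two-stage idea; the product-of-experts aggregation, network sizes, and noise-prediction parameterization are illustrative assumptions, not ShaLa's actual architecture.

```python
# Minimal two-stage sketch: (1) multimodal VAE with a shared latent z,
# (2) a diffusion prior over the inferred latents. Illustrative only.
import torch
import torch.nn as nn

LATENT_DIM = 16  # low-dimensional shared latent (size is an assumption)

class Encoder(nn.Module):
    """Per-modality encoder producing Gaussian posterior parameters."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, LATENT_DIM)
        self.logvar = nn.Linear(128, LATENT_DIM)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

def aggregate(mus, logvars):
    """Fuse per-modality posteriors via a product of experts with a
    standard-normal prior expert (an assumption; the abstract does not
    specify ShaLa's inference architecture)."""
    precisions = [torch.exp(-lv) for lv in logvars]
    prec = sum(precisions) + 1.0          # +1.0 = unit-precision prior expert
    mu = sum(m * p for m, p in zip(mus, precisions)) / prec
    return mu, -torch.log(prec)           # joint posterior mean, log-variance

class DiffusionPrior(nn.Module):
    """Stage-two prior over shared latents: a denoiser that predicts the
    noise added to z at step t (standard DDPM-style parameterization)."""
    def __init__(self, steps: int = 100):
        super().__init__()
        self.steps = steps
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + 1, 128), nn.ReLU(),
            nn.Linear(128, LATENT_DIM),
        )

    def forward(self, z_t, t):
        t_emb = t.float().unsqueeze(-1) / self.steps   # scalar time embedding
        return self.net(torch.cat([z_t, t_emb], dim=-1))

# Toy forward pass with two modalities of different dimensionality.
x_img, x_txt = torch.randn(4, 784), torch.randn(4, 300)
enc_img, enc_txt = Encoder(784), Encoder(300)
mu, logvar = aggregate(*zip(enc_img(x_img), enc_txt(x_txt)))
z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
eps_hat = DiffusionPrior()(z, torch.randint(0, 100, (4,)))
print(z.shape, eps_hat.shape)   # torch.Size([4, 16]) torch.Size([4, 16])
```

At sampling time, such a prior would replace the standard Gaussian: a shared latent is drawn by iterative denoising and then decoded by every modality decoder, which is the mechanism the abstract credits for improved joint synthesis quality.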


Published

2026-03-14

How to Cite

Cui, J., Chen, Y.-Y., Zhang, Y., & Klenk, M. (2026). ShaLa: Multimodal Shared Latent Generative Modelling. Proceedings of the AAAI Conference on Artificial Intelligence, 40(25), 20658–20666. https://doi.org/10.1609/aaai.v40i25.39203

Issue

Vol. 40 No. 25 (2026)

Section

AAAI Technical Track on Machine Learning II