M²VAE: Multi-Modal Multi-View Variational Autoencoder for Cold-start Item Recommendation
DOI:
https://doi.org/10.1609/aaai.v40i17.38501
Abstract
Cold-start item recommendation is a significant challenge in recommendation systems, particularly when new items are introduced without any historical interaction data. While existing methods leverage multi-modal content to alleviate the cold-start issue, they often neglect the inherent multi-view structure of modalities, namely the distinction between shared and modality-specific features. In this paper, we propose Multi-Modal Multi-View Variational AutoEncoder (M²VAE), a generative model that addresses the challenges of modeling common and unique views in attribute and multi-modal features, as well as user preferences over single-typed item features. Specifically, we generate type-specific latent variables for item IDs, categorical attributes, and image features, and use Product-of-Experts (PoE) to derive a common representation. A disentangled contrastive loss decouples the common view from unique views while preserving feature informativeness. To model user inclinations, we employ a user-aware hierarchical Mixture-of-Experts (MoE) to adaptively fuse representations. We further incorporate co-occurrence signals via contrastive learning, eliminating the need for pretraining. Extensive experiments on real-world datasets validate the effectiveness of our approach.
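The abstract's PoE fusion step admits a closed form when each modality's latent is Gaussian: the joint expert is a precision-weighted combination of the per-modality means and variances. The sketch below illustrates this standard Gaussian PoE; the function and variable names are hypothetical and the paper's exact parameterization may differ.

```python
import numpy as np

def product_of_experts(mus, logvars):
    """Fuse per-modality Gaussian latents q_m(z) = N(mu_m, var_m) into one
    common Gaussian via Product-of-Experts (precision-weighted averaging).
    Sketch only; names are illustrative, not the paper's implementation."""
    mus = np.asarray(mus, dtype=float)
    precisions = np.exp(-np.asarray(logvars, dtype=float))  # 1 / var_m
    common_precision = precisions.sum(axis=0)
    common_var = 1.0 / common_precision
    # Means are weighted by each expert's precision (confidence).
    common_mu = (mus * precisions).sum(axis=0) * common_var
    return common_mu, np.log(common_var)

# Example: three modality experts (item ID, attributes, image) over a
# 2-D latent, all with unit variance.
mus = [[0.0, 1.0], [2.0, -1.0], [1.0, 0.0]]
logvars = [[0.0, 0.0]] * 3
mu, logvar = product_of_experts(mus, logvars)
print(mu)               # with equal variances, simply the mean of the means
print(np.exp(logvar))   # combined variance shrinks to 1/3
```

With equal variances the fused mean reduces to an ordinary average, while a more confident (lower-variance) expert would pull the common representation toward its own mean.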
Published
2026-03-14
How to Cite
He, C., Liu, Y., Li, Q., Hong, C., Zhong, W., & Yao, X.-W. (2026). M²VAE: Multi-Modal Multi-View Variational Autoencoder for Cold-start Item Recommendation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(17), 14811–14819. https://doi.org/10.1609/aaai.v40i17.38501
Issue
Section
AAAI Technical Track on Data Mining & Knowledge Management I