DMCAR: Disentangled Mixture-of-Experts with Context-Aware Routing for Multi-View Clustering

Authors

  • Baili Xiao, College of Computer Science and Technology, National University of Defense Technology
  • Ke Liang, College of Computer Science and Technology, National University of Defense Technology
  • Jiaqi Jin, College of Computer Science and Technology, National University of Defense Technology
  • Jun Wang, College of Computer Science and Technology, National University of Defense Technology
  • Yinbo Xu, College of Intelligence Science and Technology, National University of Defense Technology
  • Siwei Wang, Intelligent Game and Decision Lab
  • En Zhu, College of Computer Science and Technology, National University of Defense Technology

DOI:

https://doi.org/10.1609/aaai.v40i32.39919

Abstract

Multi-View Clustering (MVC) aims to enhance clustering performance by integrating complementary information from multiple sources. However, existing deep MVC methods face an inherent tension between learning shared consensus representations and preserving view-specific information: fully independent encoders hinder effective cross-view collaboration, while a single shared encoder tends to sacrifice representation diversity. Although the recently introduced Mixture-of-Experts (MoE) model offers a novel way to facilitate view collaboration, its flat expert pool tends to entangle shared and specific information, and its routing mechanism neglects cross-view context, limiting collaboration. To address these challenges, this paper proposes a novel deep multi-view clustering framework, Disentangled Mixture-of-Experts with Context-Aware Routing for Multi-View Clustering (DMCAR-MVC). At its core is an innovative Disentangled MoE (D-MoE) architecture: a public expert pool learns cross-view shared representations, while each view is equipped with an independent private expert pool that captures its unique information, structurally enforcing the disentanglement of shared and specific representations. Building on this, we further design a Context-Aware Hierarchical Routing (CAHR) mechanism: when routing to the public expert pool, a global context vector guides expert selection, enabling more efficient and globally informed cross-view collaboration. Finally, the model is optimized with a multi-level contrastive learning paradigm: a cross-view alignment loss ensures semantic consistency of the shared representations, while an orthogonality constraint further enhances the separability between shared and specific representations.
Extensive experiments on multiple benchmark datasets demonstrate that DMCAR-MVC significantly outperforms state-of-the-art methods across key clustering metrics. Additionally, comprehensive ablation studies thoroughly validate the effectiveness and necessity of each proposed component.
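As an informal illustration of the architecture described in the abstract, the toy sketch below shows one way a public expert pool with context-aware routing could be combined with per-view private pools, plus a simple orthogonality penalty on the resulting representations. Everything here (random linear experts standing in for trained networks, a mean-pooled context vector, and the `d_moe_forward` and `ortho_penalty` helpers) is our own assumption for exposition; it is not the authors' implementation.

```python
import math
import random

random.seed(0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def make_expert(d_in, d_out):
    # One expert = a random linear map (stand-in for a small trained MLP).
    W = [[random.gauss(0, 0.1) for _ in range(d_in)] for _ in range(d_out)]
    return lambda x: [dot(row, x) for row in W]

def make_gate(d_in, n_experts):
    # Gating network: softmax over per-expert scores.
    W = [[random.gauss(0, 0.1) for _ in range(d_in)] for _ in range(n_experts)]
    return lambda x: softmax([dot(row, x) for row in W])

def d_moe_forward(views, public_experts, public_gate, private_pools, private_gates):
    """views: one feature vector per view -> (shared, specific) codes per view."""
    d = len(views[0])
    # Global context vector: element-wise mean over all views (our assumption
    # for the cross-view signal that guides public-pool routing).
    context = [sum(v[i] for v in views) / len(views) for i in range(d)]
    shared, specific = [], []
    for v_idx, x in enumerate(views):
        # Public routing is conditioned on [x ; context], so each view's
        # expert selection is informed by the other views.
        w_pub = public_gate(x + context)
        outs = [e(x) for e in public_experts]
        shared.append([sum(w * o[i] for w, o in zip(w_pub, outs))
                       for i in range(len(outs[0]))])
        # Private routing sees only the view's own features.
        w_pri = private_gates[v_idx](x)
        outs = [e(x) for e in private_pools[v_idx]]
        specific.append([sum(w * o[i] for w, o in zip(w_pri, outs))
                         for i in range(len(outs[0]))])
    return shared, specific

def ortho_penalty(z_shared, z_spec):
    # Squared inner product: pushes shared and specific codes toward orthogonality.
    return dot(z_shared, z_spec) ** 2

# Toy usage: 2 views, 8-d inputs, 4-d expert outputs.
d_in, d_out, n_views = 8, 4, 2
public_experts = [make_expert(d_in, d_out) for _ in range(3)]
public_gate = make_gate(2 * d_in, 3)  # gate input is [x ; context]
private_pools = [[make_expert(d_in, d_out) for _ in range(2)] for _ in range(n_views)]
private_gates = [make_gate(d_in, 2) for _ in range(n_views)]
views = [[random.gauss(0, 1) for _ in range(d_in)] for _ in range(n_views)]
shared, specific = d_moe_forward(views, public_experts, public_gate,
                                 private_pools, private_gates)
penalty = sum(ortho_penalty(s, p) for s, p in zip(shared, specific))
```

The design point this sketch tries to convey: only the public gate's input concatenates the global context, so cross-view information influences which shared experts fire, while each private gate sees only its own view, keeping view-specific information out of the shared path.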

Published

2026-03-14

How to Cite

Xiao, B., Liang, K., Jin, J., Wang, J., Xu, Y., Wang, S., & Zhu, E. (2026). DMCAR: Disentangled Mixture-of-Experts with Context-Aware Routing for Multi-View Clustering. Proceedings of the AAAI Conference on Artificial Intelligence, 40(32), 27055–27063. https://doi.org/10.1609/aaai.v40i32.39919

Section

AAAI Technical Track on Machine Learning IX