Contrastive Multi-view Subspace Clustering via Tensor Transformers Autoencoder

Authors

  • Qianqian Wang, Xidian University
  • Zihao Zhang, Xidian University
  • Wei Feng, Xi'an Jiaotong University
  • Zhiqiang Tao, Rochester Institute of Technology
  • Quanxue Gao, Xidian University

DOI:

https://doi.org/10.1609/aaai.v39i20.35419

Abstract

Multi-view clustering aims to exploit consistent and complementary information across multiple views to partition data into clusters, and has become a popular unsupervised method for multi-view data analysis. However, existing methods often design view-specific encoders that extract distinct features from each view without exploring their complementarity. Additionally, current contrastive-based multi-view clustering methods may construct erroneous negative sample pairs that conflict with the clustering objective. To address these challenges, we propose a novel Contrastive Multi-view Subspace Clustering via Tensor Transformers Autoencoder (TTAE). On the one hand, it facilitates information exchange between views through a tensor transformers autoencoder, thereby enhancing complementarity. On the other hand, it learns a consistent subspace with a self-expression layer. Meanwhile, adaptive contrastive learning provides more discriminative features for the self-expression layer, which in turn supervises contrastive learning. Moreover, our method adaptively selects positive and negative samples for contrastive learning to mitigate the impact of inappropriate negative pairs. Extensive experiments on several multi-view datasets demonstrate the effectiveness and superiority of our model.
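The abstract does not give the self-expression formulation, but the standard subspace-clustering setup it refers to can be sketched as follows: each sample embedding is reconstructed as a weighted combination of all other samples, and the learned coefficient matrix yields an affinity for spectral clustering. The closed-form ridge solution below is a minimal illustration of that idea, not the paper's actual layer (which is trained jointly with the autoencoder); the function name and regularization weight are assumptions for the sketch.

```python
import numpy as np

def self_expression(Z, lam=0.1):
    """Least-squares self-expression (illustrative, not the TTAE layer).

    Solves min_C ||Z - C Z||_F^2 + lam ||C||_F^2 in closed form, where
    rows of Z are sample embeddings. The symmetrized |C| + |C|^T is the
    affinity matrix typically passed to spectral clustering.
    """
    n = Z.shape[0]
    G = Z @ Z.T                                  # Gram matrix of embeddings
    C = np.linalg.solve(G + lam * np.eye(n), G)  # ridge solution for C
    np.fill_diagonal(C, 0.0)                     # heuristic: no self-reconstruction
    A = np.abs(C) + np.abs(C).T                  # symmetric affinity matrix
    return C, A
```

In the paper's pipeline this role is played by a trainable layer whose coefficients are regularized and refined jointly with the contrastive objective, rather than solved in closed form.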

Published

2025-04-11

How to Cite

Wang, Q., Zhang, Z., Feng, W., Tao, Z., & Gao, Q. (2025). Contrastive Multi-view Subspace Clustering via Tensor Transformers Autoencoder. Proceedings of the AAAI Conference on Artificial Intelligence, 39(20), 21207–21215. https://doi.org/10.1609/aaai.v39i20.35419

Section

AAAI Technical Track on Machine Learning VI