Multi-Level Cross-Modal Alignment for Image Clustering

Authors

  • Liping Qiu, Shenzhen University
  • Qin Zhang, Shenzhen University
  • Xiaojun Chen, Shenzhen University
  • Shaotian Cai, Shenzhen University

DOI:

https://doi.org/10.1609/aaai.v38i13.29387

Keywords:

ML: Clustering, ML: Unsupervised & Self-Supervised Learning

Abstract

Recently, cross-modal pretraining models have been employed to produce meaningful pseudo-labels to supervise the training of image clustering models. However, numerous erroneous alignments in a cross-modal pretraining model can produce poor-quality pseudo-labels and degrade clustering performance. To solve this issue, we propose a novel Multi-level Cross-modal Alignment method that improves the alignments in a cross-modal pretraining model for downstream tasks by building a smaller but better semantic space and aligning images and texts at three levels: instance level, prototype level, and semantic level. Theoretical results show that our proposed method converges and suggest effective means to reduce its expected clustering risk. Experimental results on five benchmark datasets clearly demonstrate the superiority of our new method.
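To make the three alignment levels named in the abstract concrete, here is a minimal illustrative sketch, not the authors' implementation: it assumes precomputed CLIP-style image and text embeddings and a shared set of cluster prototypes, and all function and variable names are hypothetical. Instance-level alignment is rendered as a symmetric contrastive loss on paired embeddings, prototype-level alignment as agreement between per-sample soft cluster assignments, and semantic-level alignment as matching the batch-level cluster distributions of the two modalities.

```python
# Illustrative sketch only -- NOT the paper's method. Assumes L2-normalized
# image/text features (e.g., from a CLIP-style encoder); names are hypothetical.
import torch
import torch.nn.functional as F

def instance_level_loss(img, txt, tau=0.07):
    """Symmetric InfoNCE between paired image and text embeddings."""
    logits = img @ txt.t() / tau                  # (B, B) similarity matrix
    targets = torch.arange(img.size(0))           # i-th image pairs with i-th text
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def prototype_level_loss(img_probs, txt_probs):
    """Encourage image and text soft cluster assignments to agree.
    img_probs/txt_probs: (B, K) assignment probabilities over K prototypes."""
    return F.kl_div(img_probs.log(), txt_probs, reduction="batchmean")

def semantic_level_loss(img_probs, txt_probs, eps=1e-8):
    """Match the batch-level cluster (semantic) distributions of both modalities."""
    p_img = img_probs.mean(dim=0)                 # marginal distribution over clusters
    p_txt = txt_probs.mean(dim=0)
    return (p_img * (p_img.add(eps).log() - p_txt.add(eps).log())).sum()

# Toy usage with random features standing in for encoder outputs.
B, D, K = 8, 512, 10
img = F.normalize(torch.randn(B, D), dim=1)
txt = F.normalize(torch.randn(B, D), dim=1)
prototypes = F.normalize(torch.randn(K, D), dim=1)
img_probs = F.softmax(img @ prototypes.t() / 0.1, dim=1)
txt_probs = F.softmax(txt @ prototypes.t() / 0.1, dim=1)
total = (instance_level_loss(img, txt)
         + prototype_level_loss(img_probs, txt_probs)
         + semantic_level_loss(img_probs, txt_probs))
print(total.item())
```

How the three losses are weighted, and how the smaller semantic space and prototypes are actually constructed, is specific to the paper; see the full text at the DOI below.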

Published

2024-03-24

How to Cite

Qiu, L., Zhang, Q., Chen, X., & Cai, S. (2024). Multi-Level Cross-Modal Alignment for Image Clustering. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14695-14703. https://doi.org/10.1609/aaai.v38i13.29387

Issue

Vol. 38 No. 13 (2024)

Section

AAAI Technical Track on Machine Learning IV