DiffCLIP: Few-shot Language-driven Multimodal Classifier

Authors

  • Jiaqing Zhang, Xidian University
  • Mingxiang Cao, Xidian University
  • Xue Yang, Shanghai AI Laboratory
  • Kai Jiang, Xidian University
  • Yunsong Li, Xidian University

DOI

https://doi.org/10.1609/aaai.v39i21.34401

Abstract

Vision-language models such as Contrastive Language-Image Pretraining (CLIP) have shown impressive performance in analyzing natural images with language information. However, these models often struggle in specialized domains such as remote sensing, where image-text pairs for training are scarce. To tackle this issue, we introduce DiffCLIP, a novel framework that extends CLIP to convey comprehensive language-driven semantic information for accurate classification of high-dimensional multimodal remote sensing images. DiffCLIP is a few-shot learning method that leverages unlabeled images for pretraining: it employs unsupervised mask diffusion learning to capture the distribution of diverse modalities without requiring labels. A modality-shared image encoder maps multimodal data into a unified subspace, extracting shared features with consistent parameters across modalities. The well-trained image encoder further enhances learning by aligning visual representations with class-label text information from CLIP. By integrating these approaches, DiffCLIP significantly boosts CLIP's performance using a minimal number of image-text pairs. We evaluate DiffCLIP on widely used high-dimensional multimodal datasets, demonstrating its effectiveness on classification tasks with few-shot annotations. Compared with CLIP, DiffCLIP improves overall accuracy by 10.65% across three remote sensing datasets while using only 2-shot image-text pairs.
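The alignment step the abstract describes — matching shared image features against class-label text embeddings — follows the standard CLIP-style readout, which can be sketched as below. This is a minimal illustration, not the authors' implementation: all names, shapes, and the toy data are placeholders, and in DiffCLIP the image features would come from the diffusion-pretrained modality-shared encoder while the text features would come from CLIP's text encoder.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Scale each row to unit length so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def classify_by_text_alignment(image_feats, class_text_feats):
    """Assign each image to the class whose text embedding has the highest
    cosine similarity with the image feature (CLIP-style classification)."""
    img = l2_normalize(image_feats)        # (N, D) image features
    txt = l2_normalize(class_text_feats)   # (C, D) class-label text features
    logits = img @ txt.T                   # (N, C) cosine similarities
    return logits.argmax(axis=1)           # predicted class index per image

# Toy example with placeholder embeddings: 3 classes, 8-dim features.
rng = np.random.default_rng(0)
txt_feats = rng.normal(size=(3, 8))
# Construct 4 "images" as slightly perturbed copies of class embeddings,
# so the expected predictions are classes 0, 1, 2, 0.
img_feats = txt_feats[[0, 1, 2, 0]] + 0.01 * rng.normal(size=(4, 8))
preds = classify_by_text_alignment(img_feats, txt_feats)
print(preds)
```

In a few-shot setting, the same readout applies unchanged; what DiffCLIP contributes is a better-trained image encoder, so only a handful of image-text pairs are needed to adapt the similarity space.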

Published

2025-04-11

How to Cite

Zhang, J., Cao, M., Yang, X., Jiang, K., & Li, Y. (2025). DiffCLIP: Few-shot Language-driven Multimodal Classifier. Proceedings of the AAAI Conference on Artificial Intelligence, 39(21), 22443-22451. https://doi.org/10.1609/aaai.v39i21.34401

Issue

Section

AAAI Technical Track on Machine Learning VII