CLIP-CID: Efficient CLIP Distillation via Cluster-Instance Discrimination
DOI:
https://doi.org/10.1609/aaai.v39i20.35505
Abstract
Contrastive Language-Image Pre-training (CLIP) has achieved excellent performance across a wide range of tasks. However, its effectiveness heavily relies on a substantial corpus of pre-training data, resulting in notable consumption of computational resources. Although knowledge distillation has been widely applied in single-modality models, how to efficiently extend knowledge distillation to vision-language foundation models trained on extensive data remains relatively unexplored. In this paper, we introduce CLIP-CID, a novel distillation mechanism that effectively transfers knowledge from a large vision-language foundation model to a smaller one. We first propose a simple yet efficient image semantic balance method to reduce transfer learning bias and improve distillation efficiency; this method filters out 43.7% of the image-text pairs in LAION400M while maintaining superior performance. We then leverage cluster-instance discrimination to facilitate knowledge transfer from the teacher model to the student model, empowering the student to acquire a holistic semantic understanding of the pre-training data. Experimental results demonstrate that CLIP-CID achieves state-of-the-art performance on various downstream tasks, including linear probing and zero-shot classification.
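To make the cluster-instance discrimination idea concrete, below is a minimal PyTorch sketch of a combined distillation loss under stated assumptions: a frozen teacher encoder, a student encoder, and cluster centroids precomputed (e.g., by k-means) over teacher image embeddings. The function name cid_distillation_loss and the hyperparameters tau and alpha are illustrative placeholders, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def cid_distillation_loss(student_emb, teacher_emb, centroids, tau=0.07, alpha=0.5):
        """Hypothetical cluster-instance discrimination distillation loss.

        student_emb: (B, D) image embeddings from the student encoder
        teacher_emb: (B, D) image embeddings from the frozen teacher encoder
        centroids:   (K, D) cluster centroids precomputed from teacher embeddings
        """
        s = F.normalize(student_emb, dim=-1)
        t = F.normalize(teacher_emb, dim=-1)
        c = F.normalize(centroids, dim=-1)

        # Instance-level discrimination: each student embedding should match
        # its own teacher embedding against the other instances in the batch.
        logits_inst = s @ t.T / tau                       # (B, B)
        targets = torch.arange(s.size(0), device=s.device)
        loss_inst = F.cross_entropy(logits_inst, targets)

        # Cluster-level discrimination: align the student's assignment
        # distribution over the K teacher-derived centroids with the
        # teacher's soft assignments.
        p_teacher = F.softmax(t @ c.T / tau, dim=-1)      # (B, K)
        log_p_student = F.log_softmax(s @ c.T / tau, dim=-1)
        loss_clu = F.kl_div(log_p_student, p_teacher, reduction="batchmean")

        return alpha * loss_inst + (1 - alpha) * loss_clu

In this sketch, alpha trades off instance-level alignment (fine-grained, per-sample transfer) against cluster-level alignment (the holistic semantic structure of the pre-training data); the actual loss weighting and clustering procedure are defined in the paper.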
Published
2025-04-11
How to Cite
Yang, K., Gu, T., An, X., Jiang, H., Dai, X., Feng, Z., … Deng, J. (2025). CLIP-CID: Efficient CLIP Distillation via Cluster-Instance Discrimination. Proceedings of the AAAI Conference on Artificial Intelligence, 39(20), 21974–21982. https://doi.org/10.1609/aaai.v39i20.35505
Section
AAAI Technical Track on Machine Learning VI