AdaptCLIP: Adapting CLIP for Universal Visual Anomaly Detection
DOI:
https://doi.org/10.1609/aaai.v40i6.42404
Abstract
Universal visual anomaly detection aims to identify anomalies from novel or unseen vision domains without additional fine-tuning, which is critical in open scenarios. Recent studies have demonstrated that pre-trained vision-language models like CLIP exhibit strong generalization given only zero or a few normal images. However, existing methods either struggle to design prompt templates and to handle complex token interactions, or require fine-tuning on target domains, resulting in limited flexibility. In this work, we present a simple yet effective method, AdaptCLIP, based on two key insights. First, adaptive visual and textual representations should be learned alternately rather than jointly. Second, comparative learning between a query and a normal image prompt should incorporate both contextual and aligned residual features, rather than relying solely on residual features. AdaptCLIP treats CLIP models as a foundational service, adding only three simple adapters (visual, textual, and prompt-query) at its input or output ends. AdaptCLIP supports zero-/few-shot generalization across domains and, once trained on a base dataset, requires no further training on target domains. It achieves state-of-the-art performance on 12 anomaly detection benchmarks from industrial and medical domains, significantly outperforming existing competitive methods.
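The abstract's second insight can be illustrated with a minimal sketch: compare each query patch token against its best-matching normal-prompt patch token, and feed both the contextual query features and the aligned residual into a small scoring head. This is a hedged illustration of the idea only, not the authors' implementation; the class name `PromptQueryAdapter`, the head architecture, and all tensor shapes are assumptions.

```python
# Minimal sketch (not the paper's code) of combining contextual features
# with aligned residual features for query/normal-prompt comparison.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PromptQueryAdapter(nn.Module):
    """Hypothetical adapter: fuses contextual query features with
    query-minus-aligned-prompt residual features into patch anomaly scores."""

    def __init__(self, dim: int):
        super().__init__()
        # Maps concatenated [contextual; residual] features to a scalar logit.
        self.head = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.GELU(),
            nn.Linear(dim, 1),
        )

    def forward(self, query_tokens: torch.Tensor, prompt_tokens: torch.Tensor):
        # query_tokens: (B, N, D), prompt_tokens: (B, M, D) patch features
        # from a frozen CLIP visual encoder (shapes assumed for illustration).
        q = F.normalize(query_tokens, dim=-1)
        p = F.normalize(prompt_tokens, dim=-1)
        # Cosine similarity between every query patch and every prompt patch.
        sim = torch.einsum("bnd,bmd->bnm", q, p)
        # Align each query patch with its most similar normal-prompt patch.
        idx = sim.argmax(dim=-1, keepdim=True).expand(-1, -1, query_tokens.size(-1))
        aligned = prompt_tokens.gather(1, idx)
        residual = query_tokens - aligned                       # aligned residual
        fused = torch.cat([query_tokens, residual], dim=-1)     # keep context too
        return self.head(fused).squeeze(-1)                     # (B, N) scores
```

Under this sketch, dropping `query_tokens` from the concatenation would reduce the adapter to residual-only comparison, which is exactly the baseline the abstract argues against.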
Published
2026-03-14
How to Cite
Gao, B.-B., Zhou, Y., Yan, J., Cai, Y., Zhang, W., Wang, M., … Wang, C. (2026). AdaptCLIP: Adapting CLIP for Universal Visual Anomaly Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 40(6), 4095–4103. https://doi.org/10.1609/aaai.v40i6.42404
Issue
Vol. 40 No. 6 (2026)
Section
AAAI Technical Track on Computer Vision III