VadCLIP: Adapting Vision-Language Models for Weakly Supervised Video Anomaly Detection

Authors

  • Peng Wu, Northwestern Polytechnical University
  • Xuerong Zhou, Northwestern Polytechnical University
  • Guansong Pang, Singapore Management University
  • Lingru Zhou, Northwestern Polytechnical University
  • Qingsen Yan, Northwestern Polytechnical University
  • Peng Wang, Northwestern Polytechnical University
  • Yanning Zhang, Northwestern Polytechnical University

DOI:

https://doi.org/10.1609/aaai.v38i6.28423

Keywords:

CV: Video Understanding & Activity Analysis, CV: Image and Video Retrieval, CV: Language and Vision, CV: Multi-modal Vision, CV: Scene Analysis & Understanding

Abstract

The recent contrastive language-image pre-training (CLIP) model has shown great success in a wide range of image-level tasks, revealing its remarkable ability to learn powerful visual representations with rich semantics. An open and worthwhile problem is how to efficiently adapt such a strong model to the video domain and design a robust video anomaly detector. In this work, we propose VadCLIP, a new paradigm for weakly supervised video anomaly detection (WSVAD) that leverages the frozen CLIP model directly, without any additional pre-training or fine-tuning. Unlike current works that directly feed extracted features into a weakly supervised classifier for frame-level binary classification, VadCLIP makes full use of the fine-grained associations between vision and language afforded by CLIP and adopts a dual-branch design. One branch simply utilizes visual features for coarse-grained binary classification, while the other fully leverages fine-grained language-image alignment. With the benefit of the dual-branch design, VadCLIP achieves both coarse-grained and fine-grained video anomaly detection by transferring pre-trained knowledge from CLIP to the WSVAD task. We conduct extensive experiments on two commonly used benchmarks, demonstrating that VadCLIP achieves the best performance on both coarse-grained and fine-grained WSVAD, surpassing state-of-the-art methods by a large margin. Specifically, VadCLIP achieves 84.51% AP and 88.02% AUC on XD-Violence and UCF-Crime, respectively. Code and features are released at https://github.com/nwpu-zxr/VadCLIP.
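To make the dual-branch idea in the abstract concrete, the sketch below illustrates one plausible arrangement: frame features from a frozen CLIP image encoder pass through a lightweight temporal module, then feed a coarse-grained binary head and a fine-grained branch that scores alignment against per-category embeddings standing in for CLIP text features. This is a minimal illustration under assumed dimensions and module choices, not the authors' released implementation; consult the linked repository for the actual architecture.

```python
import torch
import torch.nn as nn

class DualBranchVAD(nn.Module):
    """Illustrative dual-branch detector over frozen CLIP frame features.

    NOTE: module sizes, the temporal encoder, and the alignment scheme are
    assumptions for demonstration, not VadCLIP's actual design.
    """

    def __init__(self, feat_dim=512, num_classes=7):
        super().__init__()
        # Lightweight temporal modeling on top of frozen, precomputed features.
        self.temporal = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True),
            num_layers=1,
        )
        # Coarse-grained branch: per-frame binary anomaly score.
        self.binary_head = nn.Linear(feat_dim, 1)
        # Fine-grained branch: learnable class embeddings playing the role of
        # CLIP text features for each anomaly category.
        self.class_embed = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, clip_feats):
        # clip_feats: (B, T, D) frame features from a frozen CLIP image encoder.
        x = self.temporal(clip_feats)
        coarse = torch.sigmoid(self.binary_head(x)).squeeze(-1)  # (B, T)
        v = nn.functional.normalize(x, dim=-1)
        t = nn.functional.normalize(self.class_embed, dim=-1)
        fine = v @ t.t()  # (B, T, C) vision-language alignment logits
        return coarse, fine

if __name__ == "__main__":
    feats = torch.randn(2, 64, 512)  # stand-in for precomputed CLIP features
    coarse, fine = DualBranchVAD()(feats)
    print(coarse.shape, fine.shape)  # torch.Size([2, 64]) torch.Size([2, 64, 7])
```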

Published

2024-03-24

How to Cite

Wu, P., Zhou, X., Pang, G., Zhou, L., Yan, Q., Wang, P., & Zhang, Y. (2024). VadCLIP: Adapting Vision-Language Models for Weakly Supervised Video Anomaly Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 6074-6082. https://doi.org/10.1609/aaai.v38i6.28423

Issue

Vol. 38 No. 6 (2024)

Section

AAAI Technical Track on Computer Vision V