Self-Training Multi-Sequence Learning with Transformer for Weakly Supervised Video Anomaly Detection

Authors

  • Shuo Li, Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, International Research Center for Intelligent Perception and Computation, Joint International Research Laboratory of Intelligent Perception and Computation, School of Artificial Intelligence, Xidian University, Xi'an, 710071, P.R. China
  • Fang Liu, Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, International Research Center for Intelligent Perception and Computation, Joint International Research Laboratory of Intelligent Perception and Computation, School of Artificial Intelligence, Xidian University, Xi'an, 710071, P.R. China
  • Licheng Jiao, Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, International Research Center for Intelligent Perception and Computation, Joint International Research Laboratory of Intelligent Perception and Computation, School of Artificial Intelligence, Xidian University, Xi'an, 710071, P.R. China

DOI:

https://doi.org/10.1609/aaai.v36i2.20028

Keywords:

Computer Vision (CV), Machine Learning (ML)

Abstract

Weakly supervised Video Anomaly Detection (VAD) using Multi-Instance Learning (MIL) is usually based on the fact that the anomaly score of an abnormal snippet is higher than that of a normal snippet. At the beginning of training, due to the limited accuracy of the model, it is easy to select the wrong abnormal snippet. To reduce the probability of such selection errors, we first propose a Multi-Sequence Learning (MSL) method and a hinge-based MSL ranking loss that uses a sequence composed of multiple snippets as an optimization unit. We then design a Transformer-based MSL network to learn both the video-level anomaly probability and the snippet-level anomaly scores. In the inference stage, we propose to use the video-level anomaly probability to suppress fluctuations in the snippet-level anomaly scores. Finally, since VAD needs to predict snippet-level anomaly scores, we propose a self-training strategy that gradually refines the anomaly scores by gradually reducing the length of the selected sequences. Experimental results show that our method achieves significant improvements on ShanghaiTech, UCF-Crime, and XD-Violence.
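The core idea of the hinge-based MSL ranking loss can be illustrated with a minimal sketch: rather than ranking a single top-scoring snippet from the abnormal video above one from the normal video (as in standard MIL), MSL ranks the highest-mean sequence of K consecutive snippets. The function name, the window length `K`, and the margin value below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def msl_hinge_loss(abnormal_scores, normal_scores, K=4, margin=1.0):
    """Sketch of a hinge-based Multi-Sequence Learning ranking loss.

    Compares the highest-mean sequence of K consecutive snippet scores
    from an abnormal video against that of a normal video, and penalizes
    the pair when their gap is smaller than the margin.
    """
    def top_sequence_mean(scores, K):
        # Mean score of every length-K window of consecutive snippets;
        # keep the maximum (the most anomalous sequence).
        windows = np.convolve(np.asarray(scores, float),
                              np.ones(K) / K, mode="valid")
        return windows.max()

    s_abn = top_sequence_mean(abnormal_scores, K)
    s_nor = top_sequence_mean(normal_scores, K)
    # Hinge: push the abnormal sequence above the normal one by the margin.
    return max(0.0, margin - s_abn + s_nor)
```

Self-training then repeats this optimization while shrinking K, so the unit being ranked converges from a coarse sequence toward individual snippets.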

Published

2022-06-28

How to Cite

Li, S., Liu, F., & Jiao, L. (2022). Self-Training Multi-Sequence Learning with Transformer for Weakly Supervised Video Anomaly Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 1395-1403. https://doi.org/10.1609/aaai.v36i2.20028

Section

AAAI Technical Track on Computer Vision II