Learning Event-Relevant Factors for Video Anomaly Detection
DOI:
https://doi.org/10.1609/aaai.v37i2.25334
Keywords:
CV: Video Understanding & Activity Analysis, CV: Applications
Abstract
Most video anomaly detection methods flag events that deviate from normal patterns as anomalies. However, these methods are prone to interference from event-irrelevant factors, such as background textures and object scale variations, which increases the false detection rate. In this paper, we propose to explicitly learn event-relevant factors to eliminate the interference of event-irrelevant factors on anomaly predictions. To this end, we introduce a causal generative model to separate the event-relevant factors from the event-irrelevant ones in videos, and learn the prototypes of event-relevant factors in a memory augmentation module. We design a causal objective function to optimize the causal generative model and develop a counterfactual learning strategy to guide anomaly predictions, which increases the influence of the event-relevant factors. Extensive experiments show the effectiveness of our method for video anomaly detection.
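The memory augmentation module mentioned in the abstract is not specified here; the sketch below illustrates one common form such a module takes in the anomaly-detection literature: a bank of learned prototype vectors queried by cosine-similarity attention, so each feature is reconstructed from normal-event prototypes. All names, shapes, and hyperparameters are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_read(query, prototypes):
    """Reconstruct each query feature as an attention-weighted combination
    of prototype vectors (cosine-similarity addressing). A hypothetical
    stand-in for a learned prototype memory, not the paper's module."""
    q = query / np.linalg.norm(query, axis=-1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=-1, keepdims=True)
    weights = softmax(q @ p.T)     # (batch, num_prototypes)
    return weights @ prototypes    # (batch, feature_dim)

rng = np.random.default_rng(0)
feats = memory_read(rng.normal(size=(4, 64)),   # 4 query features
                    rng.normal(size=(10, 64)))  # 10 prototypes
print(feats.shape)  # (4, 64)
```

Features of anomalous events tend to be poorly reconstructed from normal-event prototypes, which is why this style of memory lookup is often paired with a reconstruction-error anomaly score.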
Published
2023-06-26
How to Cite
Sun, C., Shi, C., Jia, Y., & Wu, Y. (2023). Learning Event-Relevant Factors for Video Anomaly Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 2384-2392. https://doi.org/10.1609/aaai.v37i2.25334
Issue
Section
AAAI Technical Track on Computer Vision II