Appearance-Motion Memory Consistency Network for Video Anomaly Detection

Authors

  • Ruichu Cai, Guangdong University of Technology
  • Hao Zhang, Guangdong University of Technology
  • Wen Liu, ShanghaiTech University; Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences
  • Shenghua Gao, ShanghaiTech University
  • Zhifeng Hao, Guangdong University of Technology

DOI:

https://doi.org/10.1609/aaai.v35i2.16177

Keywords:

Video Understanding & Activity Analysis

Abstract

Abnormal event detection in surveillance video is an essential but challenging task, and many methods have been proposed to address it. Previous methods either consider only appearance information or directly fuse the results of appearance and motion information without explicitly modeling their endogenous consistency semantics. Inspired by the way humans identify abnormal frames from multi-modality signals, we propose an Appearance-Motion Memory Consistency Network (AMMC-Net). Our method first makes full use of the prior knowledge of appearance and motion signals to explicitly capture the correspondence between them in the high-level feature space. It then combines the multi-view features to obtain a more essential and robust representation of regular events, which significantly widens the gap between abnormal and regular events. In the anomaly detection phase, we further introduce a commit error in the latent space, combined with the prediction error in pixel space, to enhance detection accuracy. Solid experimental results on various standard datasets validate the effectiveness of our approach.
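The abstract describes scoring a frame by fusing a prediction error in pixel space with a commit error in latent space. A minimal sketch of that fusion, not the authors' implementation: the function name, input shapes, and the weight `lam` are illustrative assumptions.

```python
import numpy as np

def anomaly_score(pred_frame, true_frame, encoder_feat, memory_feat, lam=0.5):
    """Toy anomaly score: higher means more anomalous.

    pred_frame / true_frame: predicted and ground-truth frames (pixel space).
    encoder_feat / memory_feat: an encoded feature and its nearest memory
    item's feature (latent space). `lam` weights the two terms (assumed).
    """
    # Prediction error in pixel space (mean squared error).
    pred_err = np.mean((np.asarray(pred_frame) - np.asarray(true_frame)) ** 2)
    # Commit error in latent space: distance between the encoded feature
    # and the memory feature it committed to.
    commit_err = np.mean((np.asarray(encoder_feat) - np.asarray(memory_feat)) ** 2)
    # Weighted fusion of the two error terms.
    return pred_err + lam * commit_err
```

A well-predicted regular frame whose feature matches a memory item scores near zero, while an abnormal frame inflates one or both terms, widening the gap the abstract refers to.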

Published

2021-05-18

How to Cite

Cai, R., Zhang, H., Liu, W., Gao, S., & Hao, Z. (2021). Appearance-Motion Memory Consistency Network for Video Anomaly Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 35(2), 938-946. https://doi.org/10.1609/aaai.v35i2.16177

Section

AAAI Technical Track on Computer Vision I