Video Saliency Detection via Dynamic Consistent Spatio-Temporal Attention Modelling

Authors

  • Sheng-hua Zhong The Hong Kong Polytechnic University
  • Yan Liu The Hong Kong Polytechnic University
  • Feifei Ren The Hong Kong Polytechnic University and Shandong Normal University
  • Jinghuan Zhang Shandong Normal University
  • Tongwei Ren Nanjing University

DOI:

https://doi.org/10.1609/aaai.v27i1.8642

Keywords:

Video Saliency Map, Spatio-Temporal Attention Model, Optical Flow

Abstract

The human visual system actively seeks salient regions and movements in video sequences to reduce search effort. Modeling a computational visual saliency map provides important information for semantic understanding in many real-world applications. In this paper, we propose a novel video saliency detection model for detecting attended regions that correspond to both interesting objects and dominant motions in video sequences. For the spatial saliency map, we inherit the classical bottom-up spatial saliency map. For the temporal saliency map, we propose a novel optical flow model based on the dynamic consistency of motion. The spatial and temporal saliency maps are constructed and then fused to create a novel attention model. The proposed attention model is evaluated on three video datasets. Empirical validation demonstrates that the salient regions detected by our dynamic consistent saliency map highlight interesting objects both effectively and efficiently. More importantly, the video attended regions automatically detected by the proposed attention model are consistent with the ground-truth saliency maps of eye movement data.
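The abstract describes fusing a bottom-up spatial saliency map with a motion-based temporal saliency map. A minimal NumPy sketch of that fusion idea, not the paper's actual method, might use global contrast for the spatial map and frame differencing as a crude stand-in for the optical-flow-based temporal map; the mixing weight `alpha` is a hypothetical parameter, not taken from the paper:

```python
import numpy as np

def normalize(m):
    # Scale a map to [0, 1]; flat maps become all zeros.
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def spatial_saliency(frame):
    # Global-contrast map: deviation from the frame's mean intensity
    # (a crude stand-in for a classical bottom-up spatial map).
    return normalize(np.abs(frame - frame.mean()))

def temporal_saliency(prev_frame, frame):
    # Frame differencing as a proxy for optical-flow magnitude;
    # the paper instead models dynamic consistency of optical flow.
    return normalize(np.abs(frame - prev_frame))

def fused_saliency(prev_frame, frame, alpha=0.5):
    # Linear fusion of the spatial and temporal maps.
    s = spatial_saliency(frame)
    t = temporal_saliency(prev_frame, frame)
    return normalize(alpha * s + (1 - alpha) * t)
```

On a pair of grayscale frames, the fused map should respond most strongly where a bright region has moved, since both the contrast and motion terms peak there.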

Published

2013-06-30

How to Cite

Zhong, S.-H., Liu, Y., Ren, F., Zhang, J., & Ren, T. (2013). Video Saliency Detection via Dynamic Consistent Spatio-Temporal Attention Modelling. Proceedings of the AAAI Conference on Artificial Intelligence, 27(1), 1063-1069. https://doi.org/10.1609/aaai.v27i1.8642