Learning Comprehensive Motion Representation for Action Recognition

Authors

  • Mingyu Wu MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
  • Boyuan Jiang Youtu Lab, Tencent
  • Donghao Luo Youtu Lab, Tencent
  • Junchi Yan MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University; Department of Computer Science and Engineering, Shanghai Jiao Tong University
  • Yabiao Wang Youtu Lab, Tencent
  • Ying Tai Youtu Lab, Tencent
  • Chengjie Wang Youtu Lab, Tencent
  • Jilin Li Youtu Lab, Tencent
  • Feiyue Huang Youtu Lab, Tencent
  • Xiaokang Yang MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v35i4.16400

Keywords:

Video Understanding & Activity Analysis

Abstract

For action recognition, 2D CNN-based methods are efficient but may yield redundant features, since the same 2D convolution kernel is applied to every frame. Recent efforts attempt to capture motion information by establishing inter-frame connections, yet they still suffer from a limited temporal receptive field or high latency. Moreover, feature enhancement is often performed along only the channel or the spatial dimension. To address these issues, we first devise a Channel-wise Motion Enhancement (CME) module that adaptively emphasizes the channels related to dynamic information with a channel-wise gate vector. The channel gates generated by CME incorporate information from all the other frames in the video. We further propose a Spatial-wise Motion Enhancement (SME) module that focuses on the regions containing the critical moving target, based on the point-to-point similarity between adjacent feature maps; the intuition is that the background typically changes more slowly than the motion area. Both CME and SME have a clear physical meaning in capturing action cues. By integrating the two modules into an off-the-shelf 2D network, we obtain a Comprehensive Motion Representation (CMR) learning method for action recognition, which achieves competitive performance on Something-Something V1 & V2 and Kinetics-400. On the temporal reasoning datasets Something-Something V1 and V2, our method outperforms the current state of the art by 2.3% and 1.9%, respectively, when using 16 frames as input.
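
To make the two mechanisms described above concrete, the following is a minimal PyTorch-style sketch, not the authors' released implementation. The class names, the use of global average pooling and frame differences to build the channel gate, the cosine-similarity measure between adjacent frames, and the residual-style spatial re-weighting are all illustrative assumptions consistent with the abstract's description.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ChannelMotionEnhancement(nn.Module):
        """CME-style sketch: gate channels by how much dynamic
        (inter-frame) information they carry, aggregated over all frames."""
        def __init__(self, channels, n_frames, reduction=16):
            super().__init__()
            self.n_frames = n_frames
            self.fc1 = nn.Linear(channels, channels // reduction)
            self.fc2 = nn.Linear(channels // reduction, channels)

        def forward(self, x):                                # x: (N*T, C, H, W)
            nt, c, h, w = x.shape
            t = self.n_frames
            # Global average pooling per frame -> (N, T, C)
            feat = x.view(-1, t, c, h, w).mean(dim=[3, 4])
            # Frame differences as a crude motion proxy, averaged over the clip
            motion = (feat[:, 1:] - feat[:, :-1]).mean(dim=1)        # (N, C)
            gate = torch.sigmoid(self.fc2(F.relu(self.fc1(motion)))) # (N, C)
            # Broadcast one gate vector to every frame of the clip
            gate = gate.unsqueeze(1).expand(-1, t, -1).reshape(nt, c, 1, 1)
            return x * gate

    class SpatialMotionEnhancement(nn.Module):
        """SME-style sketch: up-weight locations whose features change
        between adjacent frames (background changes more slowly)."""
        def __init__(self, n_frames):
            super().__init__()
            self.n_frames = n_frames

        def forward(self, x):                                # x: (N*T, C, H, W)
            nt, c, h, w = x.shape
            t = self.n_frames
            f = x.view(-1, t, c, h, w)
            # Point-to-point similarity between adjacent feature maps
            sim = F.cosine_similarity(f[:, :-1], f[:, 1:], dim=2)    # (N, T-1, H, W)
            sim = torch.cat([sim, sim[:, -1:]], dim=1)               # pad last frame
            mask = 1.0 - sim.clamp(min=0)            # high where features change
            mask = mask.view(nt, 1, h, w)
            return x * (1.0 + mask)                  # residual-style enhancement

    # Toy usage: a batch of 2 clips with 8 frames each at one ResNet stage
    x = torch.randn(2 * 8, 64, 56, 56)
    y = SpatialMotionEnhancement(8)(ChannelMotionEnhancement(64, 8)(x))
    print(y.shape)  # torch.Size([16, 64, 56, 56])

Because both modules only re-weight an existing feature tensor, they can in principle be dropped between the stages of an off-the-shelf 2D backbone, which matches the paper's stated strategy of integrating CME and SME into a 2D network.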

Published

2021-05-18

How to Cite

Wu, M., Jiang, B., Luo, D., Yan, J., Wang, Y., Tai, Y., Wang, C., Li, J., Huang, F., & Yang, X. (2021). Learning Comprehensive Motion Representation for Action Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 2934-2942. https://doi.org/10.1609/aaai.v35i4.16400

Section

AAAI Technical Track on Computer Vision III