Memory-Augmented Temporal Dynamic Learning for Action Recognition


  • Yuan Yuan, Northwestern Polytechnical University
  • Dong Wang, Northwestern Polytechnical University
  • Qi Wang, Northwestern Polytechnical University



Human actions captured in video sequences contain two crucial factors for action recognition, i.e., visual appearance and motion dynamics. To model these two aspects, Convolutional and Recurrent Neural Networks (CNNs and RNNs) are adopted in most existing successful methods for recognizing actions. However, CNN-based methods are limited in modeling long-term motion dynamics. RNNs are able to learn temporal motion dynamics but lack effective ways to tackle unsteady dynamics in long-duration motion. In this work, we propose a memory-augmented temporal dynamic learning network, which learns to write the most evident information into an external memory module and to ignore irrelevant information. In particular, we present a differentiable memory controller that makes a discrete decision on whether the external memory module should be updated with the current feature. This discrete memory controller takes the memory history, context embedding, and current feature as inputs and controls the information flow into the external memory module. We train the discrete memory controller using the straight-through estimator. We evaluate this end-to-end system on benchmark datasets (UCF101 and HMDB51) of human action recognition. The experimental results show consistent improvements on both datasets over prior works and our baselines.
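To make the gating mechanism concrete, below is a minimal numpy sketch of a straight-through gated memory write: the forward pass makes a hard 0/1 write decision, while the backward pass routes gradients through the sigmoid as if no thresholding had occurred. This is an illustrative toy, not the authors' implementation; the function names, the scalar gate, and the simple overwrite-style update are all assumptions for exposition.

```python
import numpy as np

def st_gate_forward(logit):
    """Forward pass: hard binary write/skip decision from a gate logit."""
    p = 1.0 / (1.0 + np.exp(-logit))      # sigmoid write probability
    g = float(p > 0.5)                    # discrete 0/1 gate (non-differentiable)
    return g, p

def st_gate_backward(grad_out, p):
    """Straight-through estimator: treat the hard threshold as the
    sigmoid itself, so the gradient is grad_out * sigmoid'(logit)."""
    return grad_out * p * (1.0 - p)

def memory_update(memory, feature, logit):
    """Gated write: overwrite memory with the current feature if the
    controller decides to write (g = 1), otherwise keep the old memory."""
    g, p = st_gate_forward(logit)
    new_memory = g * feature + (1.0 - g) * memory
    return new_memory, g, p

# Toy usage: a strongly positive logit writes, a negative one skips.
mem = np.zeros(4)
feat = np.ones(4)
mem, g, p = memory_update(mem, feat, 3.0)    # controller says "write"
mem, g, p = memory_update(mem, 2 * feat, -3.0)  # controller says "skip"
```

In a full model the gate logit would itself be produced by the controller network from the memory history, context embedding, and current feature, and the surrogate gradient from `st_gate_backward` would flow back into that network during training.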




How to Cite

Yuan, Y., Wang, D., & Wang, Q. (2019). Memory-Augmented Temporal Dynamic Learning for Action Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 9167-9175.



AAAI Technical Track: Vision