An End-to-End Spatio-Temporal Attention Model for Human Action Recognition from Skeleton Data

Authors

  • Sijie Song, Peking University
  • Cuiling Lan, Microsoft Research Asia
  • Junliang Xing, Institute of Automation, Chinese Academy of Sciences
  • Wenjun Zeng, Microsoft Research Asia
  • Jiaying Liu, Peking University

DOI:

https://doi.org/10.1609/aaai.v31i1.11212

Keywords:

action recognition, LSTM, attention model

Abstract

Human action recognition is an important task in computer vision. Extracting discriminative spatial and temporal features to model the spatial and temporal evolutions of different actions plays a key role in accomplishing this task. In this work, we propose an end-to-end spatial and temporal attention model for human action recognition from skeleton data. We build our model on top of Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM), which learns to selectively focus on discriminative joints of the skeleton within each frame of the input and to pay different levels of attention to the outputs of different frames. Furthermore, to ensure effective training of the network, we propose a regularized cross-entropy loss to drive the model learning process and develop a joint training strategy accordingly. Experimental results demonstrate the effectiveness of the proposed model, both on the small SBU human action recognition dataset and on the currently largest NTU dataset.
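The two attention mechanisms described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the dimensions are arbitrary, and random arrays stand in for the scores that the paper's subnetworks would learn. It only shows the core idea of spatial attention (a softmax over joints within each frame, gating the input) and temporal attention (a per-frame importance weight applied to the recurrent outputs before classification).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical dimensions: T frames, J skeleton joints, D=3 coordinates.
T, J, D = 10, 15, 3
rng = np.random.default_rng(0)
skeleton = rng.standard_normal((T, J, D))       # input skeleton sequence

# Spatial attention: per-frame scores over joints (learned in the paper,
# random here), softmax-normalized, then used to reweight each joint's
# coordinates before they enter the LSTM.
spatial_scores = rng.standard_normal((T, J))
alpha = softmax(spatial_scores, axis=1)          # (T, J), sums to 1 per frame
weighted_input = skeleton * alpha[:, :, None]    # joint-wise gating

# Temporal attention: a scalar importance per frame (again learned in the
# paper, random here) applied to the per-frame LSTM outputs, which are then
# pooled into a single sequence-level feature for classification.
frame_features = rng.standard_normal((T, 64))    # stands in for LSTM outputs
beta = softmax(rng.standard_normal(T))           # (T,), frame importances
sequence_feature = (frame_features * beta[:, None]).sum(axis=0)  # (64,)
```

In the full model the spatial and temporal attention scores come from small recurrent subnetworks trained jointly with the main LSTM, with the proposed regularized loss keeping the attention weights from collapsing onto a single joint or frame.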

Published

2017-02-12

How to Cite

Song, S., Lan, C., Xing, J., Zeng, W., & Liu, J. (2017). An End-to-End Spatio-Temporal Attention Model for Human Action Recognition from Skeleton Data. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.11212