Towards To-a-T Spatio-Temporal Focus for Skeleton-Based Action Recognition


  • Lipeng Ke University at Buffalo, State University of New York
  • Kuan-Chuan Peng Mitsubishi Electric Research Laboratories
  • Siwei Lyu University at Buffalo, State University of New York



Computer Vision (CV)


Graph Convolutional Networks (GCNs) have been widely used to model the high-order dynamic dependencies in skeleton-based action recognition. Most existing approaches do not explicitly embed high-order spatio-temporal importance into the joints' spatial connection topology and intensity, nor do they place direct objectives on their attention modules to jointly learn when and where in the action sequence to focus. To address these problems, we propose the To-a-T Spatio-Temporal Focus (STF), a skeleton-based action recognition framework that uses the spatio-temporal gradient to focus on relevant spatio-temporal features. We first propose the STF modules with learnable gradient-enforced and instance-dependent adjacency matrices to model the high-order spatio-temporal dynamics. Second, we propose three loss terms defined on the gradient-based spatio-temporal focus to explicitly guide the classifier on when and where to look, distinguish confusing classes, and optimize the stacked STF modules. STF outperforms the state-of-the-art methods on the NTU RGB+D 60, NTU RGB+D 120, and Kinetics Skeleton 400 datasets in all 15 settings over different views, subjects, setups, and input modalities, and STF also shows better accuracy in scarce-data and dataset-shift settings.
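To make the adjacency-matrix idea concrete, here is a minimal, illustrative sketch (not the authors' implementation) of a single skeleton graph-convolution step in which a fixed physical topology is augmented by a learnable adjacency offset, the general mechanism behind learnable, instance-dependent adjacency matrices in skeleton GCNs. All names and sizes (num_joints, feat_dim, out_dim) are assumptions for the example.

```python
import numpy as np

# Illustrative sketch of one skeleton graph-convolution step.
# Sizes are assumed for the example: 25 joints with (x, y, z) features.
rng = np.random.default_rng(0)
num_joints, feat_dim, out_dim = 25, 3, 8

# Fixed physical skeleton topology (simplified here to self-connections only;
# a real skeleton graph would also connect physically adjacent joints).
A_phys = np.eye(num_joints)

# Learnable adjacency offset: training would adapt these entries so the model
# can strengthen or create connections beyond the physical bone structure.
A_learn = rng.normal(scale=0.01, size=(num_joints, num_joints))

X = rng.normal(size=(num_joints, feat_dim))  # per-joint input features
W = rng.normal(size=(feat_dim, out_dim))     # learnable feature transform

# Graph convolution: aggregate features over the (augmented) graph,
# transform them, and apply a ReLU nonlinearity.
H = np.maximum((A_phys + A_learn) @ X @ W, 0.0)
print(H.shape)  # (25, 8): one out_dim-dimensional feature per joint
```

In a full model, many such layers are stacked, and the learnable adjacency term is what gradient-based objectives (like the focus losses described above) can shape during training.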




How to Cite

Ke, L., Peng, K.-C., & Lyu, S. (2022). Towards To-a-T Spatio-Temporal Focus for Skeleton-Based Action Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 1131-1139.



AAAI Technical Track on Computer Vision I