Spatial-Temporal Multi-Cue Network for Continuous Sign Language Recognition

Authors

  • Hao Zhou, University of Science and Technology of China
  • Wengang Zhou, University of Science and Technology of China
  • Yun Zhou, University of Science and Technology of China
  • Houqiang Li, University of Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v34i07.7001

Abstract

Despite the recent success of deep learning in continuous sign language recognition (CSLR), deep models typically focus on the most discriminative features, ignoring other potentially non-trivial and informative contents. This characteristic heavily constrains their capability to learn the implicit visual grammars behind the collaboration of different visual cues (i.e., hand shape, facial expression and body posture). By injecting multi-cue learning into neural network design, we propose a spatial-temporal multi-cue (STMC) network to solve the vision-based sequence learning problem. Our STMC network consists of a spatial multi-cue (SMC) module and a temporal multi-cue (TMC) module. The SMC module is dedicated to spatial representation and explicitly decomposes visual features of different cues with the aid of a self-contained pose estimation branch. The TMC module models temporal correlations along two parallel paths, i.e., intra-cue and inter-cue, which aims to preserve the uniqueness and explore the collaboration of multiple cues. Finally, we design a joint optimization strategy to achieve end-to-end sequence learning with the STMC network. To validate its effectiveness, we perform experiments on three large-scale CSLR benchmarks: PHOENIX-2014, CSL and PHOENIX-2014-T. Experimental results demonstrate that the proposed method achieves new state-of-the-art performance on all three benchmarks.
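To make the two-path design of the TMC module concrete, the following is a minimal sketch of its data flow. All names are mine, and a simple moving average stands in for the learned temporal convolutions; it is an illustration of the intra-cue/inter-cue idea described in the abstract, not the authors' implementation.

```python
import numpy as np

def temporal_conv(x, k=3):
    """Stand-in for a learned 1-D temporal convolution:
    a moving average over the time axis of a (T, D) sequence."""
    T, _ = x.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([xp[t:t + k].mean(axis=0) for t in range(T)])

def tmc_forward(cues):
    """Sketch of the TMC module's two parallel paths.

    cues: dict mapping a cue name ('hand', 'face', ...) to a (T, D)
    feature sequence, as would be produced by the SMC module.
    Returns (intra, inter): per-cue features modelled independently
    (preserving each cue's uniqueness) and a fused feature modelling
    the collaboration of all cues.
    """
    # Intra-cue path: temporal modelling within each cue separately.
    intra = {name: temporal_conv(f) for name, f in cues.items()}
    # Inter-cue path: concatenate cues along the feature axis and
    # model their temporal correlations jointly.
    stacked = np.concatenate(list(intra.values()), axis=1)
    inter = temporal_conv(stacked)
    return intra, inter

# Toy example: 8 frames, 4-dim features per cue, three cues.
T, D = 8, 4
cues = {c: np.random.randn(T, D) for c in ("hand", "face", "full_frame")}
intra, inter = tmc_forward(cues)
print(inter.shape)  # (8, 12)
```

In the paper's full pipeline, both paths would feed sequence models trained jointly under the proposed end-to-end optimization strategy; here the example only shows how the two paths keep per-cue features separate while also producing a fused representation.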

Published

2020-04-03

How to Cite

Zhou, H., Zhou, W., Zhou, Y., & Li, H. (2020). Spatial-Temporal Multi-Cue Network for Continuous Sign Language Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 13009-13016. https://doi.org/10.1609/aaai.v34i07.7001

Section

AAAI Technical Track: Vision