Self-Emphasizing Network for Continuous Sign Language Recognition

Authors

  • Lianyu Hu, College of Intelligence and Computing, Tianjin University, Tianjin 300350, China
  • Liqing Gao, College of Intelligence and Computing, Tianjin University, Tianjin 300350, China
  • Zekang Liu, College of Intelligence and Computing, Tianjin University, Tianjin 300350, China
  • Wei Feng, College of Intelligence and Computing, Tianjin University, Tianjin 300350, China

DOI

https://doi.org/10.1609/aaai.v37i1.25164

Keywords

CV: Language and Vision, CV: 3D Computer Vision, CV: Applications, CV: Multi-modal Vision, CV: Video Understanding & Activity Analysis

Abstract

Hands and face play an important role in expressing sign language, and their features are often specifically leveraged to improve recognition performance. However, previous methods that effectively extract visual representations and capture trajectories for the hands and face usually incur high computational cost and increased training complexity: they employ heavy auxiliary pose-estimation networks to locate body keypoints, or rely on additional pre-extracted heatmaps for supervision. To address this problem, we propose a self-emphasizing network (SEN) that emphasizes informative spatial regions in a self-motivated way, with few extra computations and without additional expensive supervision. Specifically, SEN first employs a lightweight subnetwork that incorporates local spatial-temporal features to identify informative regions, and then dynamically augments the original features via attention maps. We also observe that not all frames contribute equally to recognition, and therefore present a temporal self-emphasizing module that adaptively emphasizes discriminative frames and suppresses redundant ones. A comprehensive comparison with previous methods equipped with hand and face features demonstrates the superiority of our method, despite their much higher computational cost and reliance on expensive extra supervision. Remarkably, with few extra computations, SEN achieves new state-of-the-art accuracy on four large-scale datasets: PHOENIX14, PHOENIX14-T, CSL-Daily, and CSL. Visualizations verify that SEN emphasizes informative spatial and temporal features. Code is available at https://github.com/hulianyuyy/SEN_CSLR.
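The abstract describes the two modules only at a high level. The PyTorch sketch below illustrates one plausible reading of them: a lightweight convolutional subnetwork that predicts a spatial-temporal attention map and rescales the input features, plus a per-frame weighting module driven by globally pooled features. All specifics here (layer types, kernel sizes, the reduction ratio, and the residual `x * (1 + a)` rescaling) are our assumptions for illustration, not the authors' implementation; the repository linked above contains the actual code.

```python
import torch
import torch.nn as nn

class SpatialSelfEmphasizing(nn.Module):
    """Toy spatial module: a lightweight subnetwork predicts a per-position
    attention map from a local spatial-temporal neighborhood, then rescales
    the input features. Layer choices here are illustrative guesses."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        hidden = max(channels // reduction, 8)
        self.attn = nn.Sequential(
            # 3D conv cheaply mixes a small spatial-temporal neighborhood.
            nn.Conv3d(channels, hidden, kernel_size=3, padding=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv3d(hidden, 1, kernel_size=1),
            nn.Sigmoid(),  # attention map in [0, 1]
        )

    def forward(self, x):        # x: (N, C, T, H, W)
        a = self.attn(x)         # (N, 1, T, H, W)
        return x * (1.0 + a)     # emphasize regions, keep original signal

class TemporalSelfEmphasizing(nn.Module):
    """Toy temporal module: per-frame weights from globally pooled features,
    emphasizing discriminative frames and damping redundant ones."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        hidden = max(channels // reduction, 8)
        self.attn = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=3, padding=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv1d(hidden, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (N, C, T, H, W)
        pooled = x.mean(dim=(3, 4))       # (N, C, T) global spatial pooling
        w = self.attn(pooled)             # (N, 1, T) per-frame weights
        return x * (1.0 + w.unsqueeze(-1).unsqueeze(-1))  # broadcast over H, W

# Quick smoke test with toy feature-map sizes.
x = torch.randn(2, 64, 16, 56, 56)        # (batch, channels, frames, H, W)
y = TemporalSelfEmphasizing(64)(SpatialSelfEmphasizing(64)(x))
print(y.shape)                             # torch.Size([2, 64, 16, 56, 56])
```

In a full model, modules like these would sit between the backbone's convolutional stages, so the emphasized features feed the downstream sequence model; the extra cost is small because each subnetwork bottlenecks to a low channel count before producing a single-channel map.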

Published

2023-06-26

How to Cite

Hu, L., Gao, L., Liu, Z., & Feng, W. (2023). Self-Emphasizing Network for Continuous Sign Language Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 854-862. https://doi.org/10.1609/aaai.v37i1.25164

Section

AAAI Technical Track on Computer Vision I