Hand-Model-Aware Sign Language Recognition
Keywords: Language and Vision
Abstract
Hand gestures play a dominant role in the expression of sign language. Current deep-learning-based video sign language recognition (SLR) methods usually follow a data-driven paradigm under the supervision of the category label alone. However, such methods suffer from limited interpretability and may encounter overfitting due to limited sign data sources. In this paper, we introduce the hand prior and propose a new hand-model-aware framework for isolated SLR, with the modeled hand serving as an intermediate representation. We first transform the cropped hand sequence into a latent semantic feature. The hand model then introduces the hand prior and provides a mapping from the semantic feature to a compact hand pose representation. Finally, the inference module enhances the spatio-temporal pose representation and performs the final recognition. Since current sign language datasets lack hand pose annotation, we further guide its learning with multiple weakly-supervised losses that constrain its spatial and temporal consistency. To validate the effectiveness of our method, we perform extensive experiments on four benchmark datasets: NMFs-CSL, SLR500, MSASL and WLASL. Experimental results demonstrate that our method achieves state-of-the-art performance on all four benchmarks by a notable margin.
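The abstract does not spell out the weakly-supervised losses, but a temporal-consistency constraint on the predicted pose sequence is commonly realized as a smoothness penalty over consecutive frames. The sketch below is a minimal, hypothetical illustration of such a penalty (the function name, the pose tensor layout, and the squared-difference form are all assumptions, not the paper's actual loss):

```python
import numpy as np

def temporal_consistency_loss(poses):
    """Illustrative temporal-consistency penalty (assumed form, not the paper's).

    poses: (T, J, D) array of per-frame pose parameters
           (T frames, J joints, D dims per joint).
    Returns the mean squared difference between consecutive frames,
    penalizing jittery pose trajectories.
    """
    diffs = poses[1:] - poses[:-1]   # frame-to-frame change
    return float(np.mean(diffs ** 2))

# A static pose sequence incurs zero loss; a noisy one is penalized.
static = np.zeros((4, 21, 3))
noisy = np.random.default_rng(0).normal(size=(4, 21, 3))
print(temporal_consistency_loss(static))      # 0.0
print(temporal_consistency_loss(noisy) > 0)   # True
```

In a training loop, a term like this would be added (with a weighting coefficient) to the classification loss, steering the intermediate pose representation toward temporally coherent hand motion even without pose labels.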
How to Cite
Hu, H., Zhou, W., & Li, H. (2021). Hand-Model-Aware Sign Language Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 35(2), 1558-1566. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16247
AAAI Technical Track on Computer Vision I