BEST: BERT Pre-training for Sign Language Recognition with Coupling Tokenization

Authors

  • Weichao Zhao, University of Science and Technology of China
  • Hezhen Hu, University of Science and Technology of China
  • Wengang Zhou, University of Science and Technology of China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center
  • Jiaxin Shi, Huawei Cloud
  • Houqiang Li, University of Science and Technology of China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center

DOI:

https://doi.org/10.1609/aaai.v37i3.25470

Keywords:

CV: Language and Vision, CV: Biometrics, Face, Gesture & Pose

Abstract

In this work, we leverage the success of BERT pre-training and model domain-specific statistics to benefit the sign language recognition (SLR) model. Considering the dominance of hands and body in sign language expression, we organize them as pose triplet units and feed them into the Transformer backbone in a frame-wise manner. Pre-training is performed by reconstructing the masked triplet units from the corrupted input sequence, which learns hierarchical correlation context cues within and across triplet units. Notably, unlike the highly semantic word tokens in BERT, the pose unit is a low-level signal that originally lies in a continuous space, which prevents the direct adoption of the BERT cross-entropy objective. To this end, we bridge this semantic gap via coupling tokenization of the triplet unit, which adaptively extracts a discrete pseudo label from the pose triplet unit representing the semantic gesture/body state. After pre-training, we fine-tune the pre-trained encoder on the downstream SLR task, jointly with the newly added task-specific layer. Extensive experiments validate the effectiveness of our proposed method, achieving new state-of-the-art performance on all four benchmarks with a notable gain.
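To make the pre-training pipeline described in the abstract concrete, below is a minimal PyTorch-style sketch of masked pose-triplet reconstruction with a coupling tokenizer. All module names, dimensions, codebook size, and the masking ratio are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Sketch of BERT-style pre-training on pose triplet units with coupling
# tokenization. Shapes, codebook size, and masking ratio are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CouplingTokenizer(nn.Module):
    """Maps a pose triplet unit (both hands + body per frame) to a discrete
    pseudo label via nearest-neighbour lookup in a learned codebook."""

    def __init__(self, pose_dim=3 * 64, code_dim=256, codebook_size=1024):
        super().__init__()
        self.proj = nn.Linear(pose_dim, code_dim)
        self.codebook = nn.Embedding(codebook_size, code_dim)

    def forward(self, triplet):                      # triplet: (B, T, pose_dim)
        z = self.proj(triplet)                       # (B, T, code_dim)
        # Squared Euclidean distance to every code vector: (B, T, K).
        dist = (z.pow(2).sum(-1, keepdim=True)
                - 2 * z @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(-1))
        return dist.argmin(dim=-1)                   # pseudo labels: (B, T)


class MaskedPoseBERT(nn.Module):
    """Transformer encoder that predicts the pseudo label of masked
    pose triplet units with a BERT-style cross-entropy objective."""

    def __init__(self, pose_dim=3 * 64, d_model=512, codebook_size=1024):
        super().__init__()
        self.embed = nn.Linear(pose_dim, d_model)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        self.head = nn.Linear(d_model, codebook_size)

    def forward(self, triplet, mask):                # mask: (B, T) bool, True = masked
        x = self.embed(triplet)
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        return self.head(self.encoder(x))            # logits over pseudo labels


# One pre-training step on random data (the tokenizer is kept frozen here;
# in practice it would be learned beforehand).
B, T, pose_dim = 2, 16, 3 * 64
triplet = torch.randn(B, T, pose_dim)                # pose triplet units per frame
mask = torch.rand(B, T) < 0.5                        # corrupt half of the frames

tokenizer = CouplingTokenizer(pose_dim)
model = MaskedPoseBERT(pose_dim)

with torch.no_grad():
    labels = tokenizer(triplet)                      # discrete pseudo labels

logits = model(triplet, mask)
loss = F.cross_entropy(logits[mask], labels[mask])   # only masked positions
loss.backward()
```

After pre-training, the encoder would be kept and the prediction head replaced by a task-specific classification layer for fine-tuning on the downstream SLR task.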

Published

2023-06-26

How to Cite

Zhao, W., Hu, H., Zhou, W., Shi, J., & Li, H. (2023). BEST: BERT Pre-training for Sign Language Recognition with Coupling Tokenization. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 3597-3605. https://doi.org/10.1609/aaai.v37i3.25470

Issue

Section

AAAI Technical Track on Computer Vision III