TCNet: Continuous Sign Language Recognition from Trajectories and Correlated Regions
DOI: https://doi.org/10.1609/aaai.v38i4.28181
Keywords: CV: Language and Vision, CV: Multi-modal Vision
Abstract
A key challenge in continuous sign language recognition (CSLR) is to efficiently capture long-range spatial interactions over time from the video input. To address this challenge, we propose TCNet, a hybrid network that effectively models spatio-temporal information from Trajectories and Correlated regions. TCNet's trajectory module transforms frames into aligned trajectories composed of continuous visual tokens. This facilitates extracting region trajectory patterns. In addition, for a query token, self-attention is learned along the trajectory. As such, our network can also focus on fine-grained spatio-temporal patterns, such as finger movement, of a region in motion. TCNet's correlation module utilizes a novel dynamic attention mechanism that filters out irrelevant frame regions. Additionally, it assigns dynamic key-value tokens from correlated regions to each query. Both innovations significantly reduce computational cost and memory usage. We perform experiments on four large-scale datasets: PHOENIX14, PHOENIX14-T, CSL, and CSL-Daily. Our results demonstrate that TCNet consistently achieves state-of-the-art performance. For example, we improve over the previous state-of-the-art by 1.5\% and 1.0\% word error rate on PHOENIX14 and PHOENIX14-T, respectively. Code is available at https://github.com/hotfinda/TCNet
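The "self-attention along the trajectory" idea from the abstract can be sketched minimally: a single spatial region's visual token is tracked across frames, and scaled dot-product attention is applied along that temporal sequence so each frame's token is contextualized by the same region at other time steps. This is an illustrative stand-in only (identity Q/K/V projections, NumPy instead of a deep-learning framework); the actual TCNet module learns its projections and operates on aligned trajectories extracted by the network.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def trajectory_self_attention(traj_tokens):
    """Scaled dot-product self-attention over one region's trajectory.

    traj_tokens: (T, D) array holding the same spatial region's visual
    token tracked across T frames. Identity projections are used here
    for illustration; the real module learns Q, K, V projections.
    """
    T, D = traj_tokens.shape
    Q = K = V = traj_tokens
    scores = Q @ K.T / np.sqrt(D)        # (T, T) similarities along the trajectory
    weights = softmax(scores, axis=-1)   # each frame attends to all frames of its region
    return weights @ V                   # (T, D) temporally contextualized tokens

# Toy trajectory: 4 frames, 8-dimensional tokens.
rng = np.random.default_rng(0)
traj = rng.standard_normal((4, 8))
out = trajectory_self_attention(traj)
print(out.shape)  # (4, 8)
```

Restricting attention to tokens on the same trajectory, rather than over all frame regions, is what lets the network attend to fine-grained motion (e.g., finger movement) of a moving region while keeping the attention footprint small.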
Published
2024-03-24
How to Cite
Lu, H., Salah, A. A., & Poppe, R. (2024). TCNet: Continuous Sign Language Recognition from Trajectories and Correlated Regions. Proceedings of the AAAI Conference on Artificial Intelligence, 38(4), 3891–3899. https://doi.org/10.1609/aaai.v38i4.28181
Section: AAAI Technical Track on Computer Vision III