Transformer with Controlled Attention for Synchronous Motion Captioning
DOI:
https://doi.org/10.1609/aaai.v40i11.37821

Abstract
In this paper, we address a challenging task, synchronous motion captioning, which aims to generate a language description synchronized with human motion sequences. This task pertains to numerous applications, such as aligned sign language transcription, unsupervised action segmentation, and temporal grounding. Our method introduces mechanisms to control the self- and cross-attention distributions of the Transformer, enabling interpretability and time-aligned text generation. We achieve this through masking strategies and structuring losses that push the model to maximize attention only on the most important frames contributing to the generation of a motion word. These constraints prevent undesired mixing of information in the attention maps and enforce a monotonic attention distribution across tokens. The cross-attention of each token is then used for progressive text generation in synchronization with human motion sequences. We demonstrate the superior performance of our approach through evaluation on the two available benchmark datasets, KIT-ML and HumanML3D. As visual evaluation is essential for this task, we provide a comprehensive set of animated visual illustrations of the synchronous text generation output in the code repository.
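The abstract mentions structuring losses that concentrate each token's attention on a few frames and keep attention monotonic across tokens. As a rough illustration of what such constraints can look like, the sketch below (an assumption, not the paper's actual loss definitions) computes an entropy-based sparsity penalty and a monotonicity penalty over a token-by-frame cross-attention map:

```python
import numpy as np

def attention_losses(A, eps=1e-8):
    """Illustrative structuring losses on a cross-attention map.

    A: (T, F) array where row t is the attention of text token t over
       F motion frames; each row is assumed to sum to 1.
    Returns (sparsity, monotonicity) penalties. The names and exact
    forms are hypothetical, chosen only to illustrate the idea.
    """
    # Sparsity: mean row entropy -- low entropy means each token
    # concentrates its attention on few frames.
    sparsity = -np.mean(np.sum(A * np.log(A + eps), axis=1))

    # Monotonicity: the expected frame index per token should not
    # decrease as we move forward through the sentence.
    frames = np.arange(A.shape[1])
    centers = A @ frames                               # expected frame per token
    monotonicity = np.sum(np.maximum(centers[:-1] - centers[1:], 0.0))

    return sparsity, monotonicity
```

For a perfectly sharp, monotone map (e.g. an identity matrix) both penalties are near zero, while a reversed token order incurs a positive monotonicity penalty; in training, such terms would be added to the captioning loss.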
Published
2026-03-14
How to Cite
Radouane, K., Ranwez, S., Lagarde, J., & Tchechmedjiev, A. (2026). Transformer with Controlled Attention for Synchronous Motion Captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(11), 8686–8693. https://doi.org/10.1609/aaai.v40i11.37821
Issue
Section
AAAI Technical Track on Computer Vision VIII