VA-AR: Learning Velocity-Aware Action Representations with Mixture of Window Attention

Authors

  • Jiangning Wei, Beijing University of Posts and Telecommunications
  • Lixiong Qin, Beijing University of Posts and Telecommunications
  • Bo Yu, Beijing University of Posts and Telecommunications
  • Tianjian Zou, Beijing University of Posts and Telecommunications
  • Chuhan Yan, Macau University of Science and Technology
  • Dandan Xiao, China Institute of Sport Science
  • Yang Yu, Beijing Sport University
  • Lan Yang, Beijing University of Posts and Telecommunications
  • Ke Li, Beijing University of Posts and Telecommunications
  • Jun Liu, Beijing University of Posts and Telecommunications

DOI:

https://doi.org/10.1609/aaai.v39i8.32894

Abstract

Action recognition is a crucial task in artificial intelligence, with significant implications across various domains. We first conduct a comprehensive analysis of seven prominent action recognition methods across five widely used datasets. This analysis reveals a critical yet previously overlooked observation: as the velocity of actions increases, the performance of these methods declines to varying degrees, undermining their robustness. This decline poses significant challenges for their application in real-world scenarios. Building on these findings, we introduce the Velocity-Aware Action Recognition (VA-AR) framework to obtain robust action representations across different velocities. Our principal insight is that rapid actions (e.g., the giant circle backward in uneven bars or a smash in badminton) occur within short time intervals, necessitating smaller temporal attention windows to accurately capture intricate changes. Conversely, slower actions (e.g., drinking water or wiping the face) require larger windows to effectively encompass the broader context. VA-AR employs a Mixture of Window Attention (MoWA) strategy, dynamically adjusting its attention window size based on the action's velocity. This adjustment enables VA-AR to obtain a velocity-aware representation, thereby enhancing the accuracy of action recognition. Extensive experiments confirm that VA-AR achieves state-of-the-art performance on the same five datasets, demonstrating its effectiveness across a broad spectrum of action recognition scenarios.
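The core idea described above, that fast actions call for small temporal attention windows while slow actions call for large ones, can be sketched as a gated mixture of windowed self-attention branches. The snippet below is an illustrative toy module only: the paper's actual MoWA architecture is not specified on this page, and the velocity estimate here (mean frame-to-frame feature change) and the gating layer are our assumptions, not the authors' design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfWindowAttention(nn.Module):
    """Toy mixture-of-window-attention sketch (not the paper's implementation).

    Runs local self-attention at several temporal window sizes and mixes the
    branch outputs with gates driven by a crude per-clip velocity estimate.
    """

    def __init__(self, dim, window_sizes=(4, 8, 16), num_heads=4):
        super().__init__()
        self.window_sizes = window_sizes
        self.attns = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in window_sizes
        )
        # Gate maps the scalar velocity estimate to mixture weights
        # over the candidate window sizes (hypothetical design choice).
        self.gate = nn.Linear(1, len(window_sizes))

    def forward(self, x):  # x: (batch, time, dim) frame features
        B, T, D = x.shape
        # Velocity proxy: average magnitude of frame-to-frame change.
        vel = (x[:, 1:] - x[:, :-1]).abs().mean(dim=(1, 2))       # (B,)
        weights = F.softmax(self.gate(vel.unsqueeze(-1)), dim=-1)  # (B, K)

        outs = []
        for w, attn in zip(self.window_sizes, self.attns):
            # Block-diagonal mask: each frame attends only within its
            # own non-overlapping window of length w (True = blocked).
            idx = torch.arange(T, device=x.device) // w
            mask = idx.unsqueeze(0) != idx.unsqueeze(1)
            out, _ = attn(x, x, x, attn_mask=mask)
            outs.append(out)
        outs = torch.stack(outs, dim=1)                            # (B, K, T, D)
        # Velocity-dependent convex combination of the branch outputs.
        return (weights[:, :, None, None] * outs).sum(dim=1)
```

In this sketch a high-velocity clip can learn to up-weight the small-window branch, while a slow clip leans on the large-window branch; the paper's dynamic window-size adjustment is presumably more sophisticated than this scalar gate.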

Published

2025-04-11

How to Cite

Wei, J., Qin, L., Yu, B., Zou, T., Yan, C., Xiao, D., Yu, Y., Yang, L., Li, K., & Liu, J. (2025). VA-AR: Learning Velocity-Aware Action Representations with Mixture of Window Attention. Proceedings of the AAAI Conference on Artificial Intelligence, 39(8), 8286-8294. https://doi.org/10.1609/aaai.v39i8.32894

Section

AAAI Technical Track on Computer Vision VII