RA-GAR: A Richly Annotated Benchmark for Gait Attribute Recognition

Authors

  • Chenye Wang, Beijing Normal University
  • Saihui Hou, Beijing Normal University; WATRIX.AI
  • Aoqi Li, Beijing Normal University
  • Qingyuan Cai, Beijing Normal University
  • Yongzhen Huang, Beijing Normal University; WATRIX.AI

DOI:

https://doi.org/10.1609/aaai.v39i7.32817

Abstract

Gait attracts growing interest from researchers due to its advantages as a non-invasive and non-cooperative biometric feature. Current gait-based attribute recognition methods primarily focus on estimating attributes such as gender, age, and emotion. However, diverse gait attributes under various covariate scenarios have received insufficient attention. In this paper, we design and collect a Richly Annotated benchmark for 15 gait attributes, named RA-GAR, comprising data from 533 individuals with over 120,000 sequences. To our knowledge, RA-GAR is the largest and most diverse benchmark of gait attributes currently available. Furthermore, to fully leverage semantic information and enhance attribute-specific local perception, we propose a two-stage CLIP-based method for Gait Attribute Recognition, named CLIP-GAR. Experiments on the RA-GAR and MA-Gait datasets demonstrate the effectiveness of CLIP-GAR, showing significant improvements in mean accuracy and F1 score.
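The abstract does not detail CLIP-GAR's architecture, so the sketch below only illustrates the general idea behind CLIP-style attribute recognition: score a gait feature against per-attribute text prompts in a shared embedding space and pick the best-matching prompt. The encoders here are toy deterministic stand-ins, and the attribute names and prompts are hypothetical, not the paper's actual 15-attribute taxonomy.

```python
import numpy as np

EMBED_DIM = 64  # toy stand-in for CLIP's joint embedding dimension

# Hypothetical attribute vocabulary; RA-GAR's real taxonomy covers 15 attributes.
ATTRIBUTE_PROMPTS = {
    "gender": ["a gait sequence of a male person",
               "a gait sequence of a female person"],
    "carrying": ["a person walking empty-handed",
                 "a person walking while carrying a bag"],
}

def encode_text(prompt: str) -> np.ndarray:
    """Stand-in for a CLIP text encoder: a fixed random vector per prompt."""
    seed = sum(ord(c) for c in prompt)           # deterministic across runs
    v = np.random.default_rng(seed).standard_normal(EMBED_DIM)
    return v / np.linalg.norm(v)                 # L2-normalize, as CLIP does

def encode_gait(sequence: np.ndarray) -> np.ndarray:
    """Stand-in for a gait encoder: mean-pool frames, then L2-normalize."""
    pooled = sequence.mean(axis=0)[:EMBED_DIM]
    return pooled / np.linalg.norm(pooled)

def predict_attributes(sequence: np.ndarray) -> dict:
    """For each attribute, pick the prompt with highest cosine similarity."""
    feat = encode_gait(sequence)
    preds = {}
    for attr, prompts in ATTRIBUTE_PROMPTS.items():
        sims = np.stack([encode_text(p) for p in prompts]) @ feat
        preds[attr] = int(np.argmax(sims))       # index of best-matching prompt
    return preds
```

As a usage example, a (frames x features) array stands in for an encoded gait sequence: `predict_attributes(np.random.default_rng(1).standard_normal((30, EMBED_DIM)))` returns one predicted class index per attribute.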

Published

2025-04-11

How to Cite

Wang, C., Hou, S., Li, A., Cai, Q., & Huang, Y. (2025). RA-GAR: A Richly Annotated Benchmark for Gait Attribute Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 39(7), 7591–7599. https://doi.org/10.1609/aaai.v39i7.32817

Section

AAAI Technical Track on Computer Vision VI