SUGAR: Learning Skeleton Representation with Visual-Motion Knowledge for Action Recognition

Authors

  • Qilang Ye (VCIP & TMCC & DISSec, College of Computer Science & College of Cryptology and Cyber Science, Nankai University; Zhongguancun Academy)
  • Yu Zhou (VCIP & TMCC & DISSec, College of Computer Science & College of Cryptology and Cyber Science, Nankai University; Zhongguancun Academy)
  • Lian He (School of Computer Science & Technology, Beijing Institute of Technology; Zhongguancun Academy)
  • Jie Zhang (Great Bay University)
  • Xuanming Guo (Zhongguancun Academy)
  • Jiayu Zhang (Great Bay University)
  • Mingkui Tan (South China University of Technology)
  • Weicheng Xie (Shenzhen University)
  • Yue Sun (Macao Polytechnic University)
  • Tao Tan (Macao Polytechnic University)
  • Xiaochen Yuan (Macao Polytechnic University)
  • Ghada Khoriba (Nile University)
  • Zitong Yu (Great Bay University; Dongguan Key Laboratory for Intelligence and Information Technology)

DOI:

https://doi.org/10.1609/aaai.v40i21.38852

Abstract

Large Language Models (LLMs) hold rich implicit knowledge and powerful transferability. In this paper, we explore combining LLMs with the human skeleton to perform action classification and description. However, treating an LLM as a recognizer raises two questions: 1) How can LLMs understand the skeleton? 2) How can LLMs distinguish among actions? To address these problems, we introduce a novel paradigm named learning Skeleton representation with visual-motion knowledge for Action Recognition (SUGAR). In our pipeline, we first utilize off-the-shelf large-scale video models as a knowledge base to generate visual and motion information related to actions. We then supervise skeleton learning with this prior knowledge to yield discrete representations. Finally, an LLM with untouched (frozen) pre-trained weights consumes these representations and generates the desired action targets and descriptions. Notably, we present a Temporal Query Projection (TQP) module to continuously model long-sequence skeleton signals. Experiments on several skeleton-based action classification benchmarks demonstrate the efficacy of SUGAR. Moreover, experiments in zero-shot scenarios show that SUGAR is more versatile than linear-based methods.
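The pipeline sketched in the abstract (discretizing skeleton features, then compressing a long token sequence into a fixed set of queries for a frozen LLM) can be illustrated with a minimal NumPy toy. This is a hypothetical sketch of the described idea, not the authors' implementation; all names, shapes, and the specific quantization and attention choices are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of the SUGAR-style pipeline from the abstract; all
# function names and sizes are illustrative assumptions, not the paper's code.

rng = np.random.default_rng(0)

def encode_skeleton(skeleton, codebook):
    """Map continuous per-frame skeleton features to discrete codebook
    indices (a vector-quantization-style nearest-neighbor lookup, one way
    to obtain the 'discrete representations' the abstract mentions)."""
    # skeleton: (T, D) frame features; codebook: (K, D) learned entries
    dists = ((skeleton[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (T, K)
    return dists.argmin(axis=1)  # (T,) discrete token ids

def temporal_query_projection(tokens, codebook, queries):
    """TQP-style step (assumed form): a small set of learnable queries
    attends over the long token sequence, yielding a compact continuous
    summary that a frozen LLM could consume as soft prompts."""
    feats = codebook[tokens]                       # (T, D) de-quantized features
    logits = queries @ feats.T                     # (Q, T) attention logits
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)        # softmax over the time axis
    return attn @ feats                            # (Q, D) projected queries

T, D, K, Q = 300, 16, 32, 8                        # long sequence, toy sizes
skeleton = rng.normal(size=(T, D))
codebook = rng.normal(size=(K, D))
queries = rng.normal(size=(Q, D))

tokens = encode_skeleton(skeleton, codebook)
summary = temporal_query_projection(tokens, codebook, queries)
print(tokens.shape, summary.shape)                 # (300,) (8, 16)
```

The key design point mirrored here is that the LLM-facing output has a fixed size (Q queries) regardless of sequence length T, which is what makes long skeleton sequences tractable for a frozen language model.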

Published

2026-03-14

How to Cite

Ye, Q., Zhou, Y., He, L., Zhang, J., Guo, X., Zhang, J., Tan, M., Xie, W., Sun, Y., Tan, T., Yuan, X., Khoriba, G., & Yu, Z. (2026). SUGAR: Learning Skeleton Representation with Visual-Motion Knowledge for Action Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 40(21), 17930-17938. https://doi.org/10.1609/aaai.v40i21.38852

Section

AAAI Technical Track on Humans and AI