R-AVST: Empowering Video-LLMs with Fine-Grained Spatio-Temporal Reasoning in Complex Audio-Visual Scenarios

Authors

  • Zhu Lu, Southern University of Science and Technology; Spatialtemporal AI
  • Tiantian Geng, Southern University of Science and Technology; University of Birmingham
  • Yangye Chen, Southern University of Science and Technology
  • Teng Wang, Southern University of Science and Technology; The University of Hong Kong
  • Ping Lu, ZTE Corporation
  • Feng Zheng, Southern University of Science and Technology; Spatialtemporal AI

DOI:

https://doi.org/10.1609/aaai.v40i9.37704

Abstract

Recently, rapid advancements have been made in multimodal large language models (MLLMs), especially in video understanding tasks. However, current research focuses on simple video scenarios and fails to reflect the complex and diverse nature of real-world audio-visual events in videos. To bridge this gap, we first introduce R-AVST, a dataset for audio-visual reasoning featuring fine-grained spatio-temporal annotations. To construct it, we design a pipeline consisting of LLM-based key object extraction, automatic spatial annotation, and manual quality inspection, yielding over 5K untrimmed videos with 27K annotated objects across 100 types of audio-visual events. Building on this dataset, we define three core tasks for spatio-temporal reasoning in audio-visual scenes and generate more than 8K high-quality, evenly distributed question-answer pairs to benchmark model performance effectively. To further enhance reasoning, we propose AVST-Zero, a reinforcement learning-based model that avoids intermediate supervision and directly optimizes behavior via carefully designed multi-dimensional rewards. Extensive experiments validate the effectiveness of R-AVST in advancing audio-visual spatio-temporal reasoning, on which AVST-Zero achieves competitive performance compared with existing models. To the best of our knowledge, R-AVST is the first dataset designed for real-world audio-visual spatio-temporal reasoning, and AVST-Zero offers a novel perspective for tackling future challenges in this domain.

Published

2026-03-14

How to Cite

Lu, Z., Geng, T., Chen, Y., Wang, T., Lu, P., & Zheng, F. (2026). R-AVST: Empowering Video-LLMs with Fine-Grained Spatio-Temporal Reasoning in Complex Audio-Visual Scenarios. Proceedings of the AAAI Conference on Artificial Intelligence, 40(9), 7627–7635. https://doi.org/10.1609/aaai.v40i9.37704

Section

AAAI Technical Track on Computer Vision VI