RoadSceneVQA: Benchmarking Visual Question Answering in Roadside Perception Systems for Intelligent Transportation System

Authors

  • Runwei Guan, Information Hub, The Hong Kong University of Science and Technology (Guangzhou); Institute of Deep Perception Technology, JITRI
  • Rongsheng Hu, School of Artificial Intelligence and Computer Science, Jiangnan University
  • Shangshu Chen, National Research Center of Cultural Industries, Central China Normal University
  • Ningyuan Xiao, Information Hub, The Hong Kong University of Science and Technology (Guangzhou)
  • Xue Xia, Information Hub, The Hong Kong University of Science and Technology (Guangzhou)
  • Jiayang Liu, Information Hub, The Hong Kong University of Science and Technology (Guangzhou)
  • Beibei Chen, School of Marxism, Jilin University
  • Ziren Tang, Information Hub, The Hong Kong University of Science and Technology (Guangzhou)
  • Ningwei Ouyang, School of Advanced Technology, Xi'an Jiaotong-Liverpool University
  • Shaofeng Liang, Information Hub, The Hong Kong University of Science and Technology (Guangzhou)
  • Yuxuan Fan, Information Hub, The Hong Kong University of Science and Technology (Guangzhou)
  • Wanjie Sun, School of Remote Sensing and Information Engineering, Wuhan University
  • Yutao Yue, Information Hub, The Hong Kong University of Science and Technology (Guangzhou); Institute of Deep Perception Technology, JITRI

DOI:

https://doi.org/10.1609/aaai.v40i6.42434

Abstract

Current roadside perception systems mainly focus on instance-level perception, which falls short of enabling interaction via natural language and reasoning about traffic behaviors in context. To bridge this gap, we introduce RoadSceneVQA, a large-scale, richly annotated visual question answering (VQA) dataset specifically tailored to roadside scenarios. The dataset comprises 34,736 diverse QA pairs collected under varying weather, illumination, and traffic conditions, targeting not only object attributes but also the intent, legality, and interaction patterns of traffic participants. RoadSceneVQA challenges models to perform both explicit recognition and implicit commonsense reasoning, grounded in real-world traffic rules and contextual dependencies. To fully exploit the reasoning potential of Multi-modal Large Language Models (MLLMs), we further propose CogniAnchor Fusion (CAF), a vision-language fusion module inspired by human-like scene anchoring mechanisms that enables precise and efficient cross-modal interaction. Moreover, we propose the Assisted Decoupled Chain-of-Thought (AD-CoT) to enhance reasoning via CoT prompting and multi-task learning. Experimental results on RoadSceneVQA and the CODA-LM benchmark show that the pipeline consistently improves both reasoning accuracy and computational efficiency, allowing the MLLM to achieve state-of-the-art performance on structured traffic perception and reasoning tasks.
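The abstract describes CAF only at a high level, so the following is a minimal sketch of the kind of vision-language fusion it alludes to, assuming a generic multi-head cross-attention formulation in which question tokens attend to visual tokens. The class name, dimensions, and PyTorch framing are illustrative assumptions, not the paper's actual CAF implementation.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Illustrative vision-language fusion via cross-attention.

    NOTE: this is NOT the paper's CogniAnchor Fusion (CAF); it is a
    generic sketch in which text (question) tokens query visual tokens,
    loosely mirroring the idea of anchoring a question on scene regions.
    """

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, text_tokens: torch.Tensor, vis_tokens: torch.Tensor) -> torch.Tensor:
        # Text tokens act as queries; visual tokens supply keys/values,
        # so each question token gathers evidence from image regions.
        fused, _ = self.attn(text_tokens, vis_tokens, vis_tokens)
        x = self.norm(text_tokens + fused)  # residual + norm
        return x + self.ffn(x)              # position-wise refinement

# Usage: 16 question tokens attending over 196 visual patch tokens.
text = torch.randn(1, 16, 256)
vision = torch.randn(1, 196, 256)
out = CrossModalFusion()(text, vision)
print(out.shape)  # torch.Size([1, 16, 256])
```

In the same spirit, AD-CoT as described would pair such fused features with chain-of-thought prompting and auxiliary task heads; its exact decomposition is not specified on this page.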

Published

2026-03-14

How to Cite

Guan, R., Hu, R., Chen, S., Xiao, N., Xia, X., Liu, J., … Yue, Y. (2026). RoadSceneVQA: Benchmarking Visual Question Answering in Roadside Perception Systems for Intelligent Transportation System. Proceedings of the AAAI Conference on Artificial Intelligence, 40(6), 4366–4375. https://doi.org/10.1609/aaai.v40i6.42434

Section

AAAI Technical Track on Computer Vision III