VQAThinker: Exploring Generalizable and Explainable Video Quality Assessment via Reinforcement Learning

Authors

  • Linhan Cao Shanghai Jiao Tong University
  • Wei Sun East China Normal University
  • Weixia Zhang Shanghai Jiao Tong University
  • Xiangyang Zhu Shanghai Artificial Intelligence Laboratory
  • Jun Jia Shanghai Jiao Tong University
  • Kaiwei Zhang Shanghai Jiao Tong University
  • Dandan Zhu East China Normal University
  • Guangtao Zhai Shanghai Jiao Tong University
  • Xiongkuo Min Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v40i4.37248

Abstract

Video quality assessment (VQA) aims to objectively quantify perceptual quality degradation in alignment with human visual perception. Despite recent advances, existing VQA models still suffer from two critical limitations: poor generalization to out-of-distribution (OOD) videos and limited explainability, which restrict their applicability in real-world scenarios. To address these challenges, we propose VQAThinker, a reasoning-based VQA framework that leverages large multimodal models (LMMs) with reinforcement learning to jointly model video quality understanding and scoring, emulating human perceptual decision-making. Specifically, we adopt group relative policy optimization (GRPO), a rule-guided reinforcement learning algorithm that enables reasoning over video quality under score-level supervision, and introduce three VQA-specific rewards: (1) a bell-shaped regression reward that increases rapidly as the prediction error decreases and becomes progressively less sensitive near the ground truth; (2) a pairwise ranking reward that guides the model to correctly determine the relative quality between video pairs; and (3) a temporal consistency reward that encourages the model to prefer temporally coherent videos over their perturbed counterparts. Extensive experiments demonstrate that VQAThinker achieves state-of-the-art performance on both in-domain and OOD VQA benchmarks, showing strong generalization for video quality scoring. Furthermore, evaluations on video quality understanding tasks validate its superiority in distortion attribution and quality description compared to existing explainable VQA models and LMMs. These findings demonstrate that reinforcement learning offers an effective pathway toward building generalizable and explainable VQA models solely with score-level supervision.
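The three VQA-specific rewards described above can be illustrated with a minimal sketch. Note that the exact functional forms and hyperparameters (e.g., the `sigma` bandwidth below) are assumptions for illustration, not the paper's published formulas; the binary ranking and temporal rewards are likewise plausible instantiations of the described behavior.

```python
import math

def regression_reward(pred: float, gt: float, sigma: float = 0.5) -> float:
    """Bell-shaped (Gaussian) regression reward: close to 1 when the
    predicted score is near the ground truth, with the gradient flattening
    near gt so the reward is progressively less sensitive there.
    `sigma` is an assumed bandwidth, not a value from the paper."""
    return math.exp(-((pred - gt) ** 2) / (2 * sigma ** 2))

def ranking_reward(pred_a: float, pred_b: float,
                   gt_a: float, gt_b: float) -> float:
    """Pairwise ranking reward: 1 if the predicted ordering of the two
    videos agrees with the ground-truth ordering, else 0."""
    return 1.0 if (pred_a - pred_b) * (gt_a - gt_b) > 0 else 0.0

def temporal_reward(pred_original: float, pred_perturbed: float) -> float:
    """Temporal consistency reward: 1 if the temporally coherent video is
    scored above its temporally perturbed counterpart, else 0."""
    return 1.0 if pred_original > pred_perturbed else 0.0
```

A per-rollout reward would then combine these terms (e.g., a weighted sum) to supervise GRPO with only score-level labels.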

Published

2026-03-14

How to Cite

Cao, L., Sun, W., Zhang, W., Zhu, X., Jia, J., Zhang, K., Zhu, D., Zhai, G., & Min, X. (2026). VQAThinker: Exploring Generalizable and Explainable Video Quality Assessment via Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(4), 2607-2615. https://doi.org/10.1609/aaai.v40i4.37248

Section

AAAI Technical Track on Computer Vision I