What, Whether and How? Unveiling Process Reward Models for Thinking with Images Reasoning

Authors

  • Yujin Zhou The Hong Kong University of Science and Technology
  • Pengcheng Wen The Hong Kong University of Science and Technology
  • Jiale Chen Sun Yat-sen University
  • Boqin Yin The Hong Kong University of Science and Technology
  • Han Zhu The Hong Kong University of Science and Technology
  • Jiaming Ji Peking University
  • Juntao Dai Peking University
  • Chi-Min Chan The Hong Kong University of Science and Technology
  • Sirui Han The Hong Kong University of Science and Technology

DOI:

https://doi.org/10.1609/aaai.v40i34.40144

Abstract

The rapid advancement of Large Vision-Language Models (LVLMs) has yielded excellent performance across a wide range of visual tasks. Building on these developments, the "thinking with images" paradigm has emerged, enabling models to dynamically edit and re-encode visual information at each reasoning step, mirroring human visual processing. However, this paradigm introduces significant challenges, as diverse errors may occur during the reasoning process. This necessitates Process Reward Models (PRMs) for distinguishing positive from negative reasoning steps, yet existing benchmarks for PRMs are predominantly text-centric and lack comprehensive assessment under this paradigm. To address these gaps, this work introduces the first comprehensive benchmark specifically designed for evaluating PRMs under the thinking-with-images paradigm. Our main contributions are: (1) through extensive analysis of reasoning trajectories and PRM-guided search experiments, we define 7 fine-grained error types and demonstrate both the necessity of specialized PRMs and their potential for improvement; (2) we construct a comprehensive benchmark comprising 1,206 manually annotated thinking-with-images reasoning trajectories spanning 4 categories and 16 subcategories for fine-grained evaluation of PRMs; (3) our experimental analysis reveals that current LVLMs fall short as effective PRMs, exhibiting limited capability in evaluating visual reasoning processes, with significant performance disparities across error types, a bias toward positive evaluations, and sensitivity to reasoning-step position. These findings demonstrate the effectiveness of our benchmark and establish crucial foundations for advancing PRMs in LVLMs.

Published

2026-03-14

How to Cite

Zhou, Y., Wen, P., Chen, J., Yin, B., Zhu, H., Ji, J., … Han, S. (2026). What, Whether and How? Unveiling Process Reward Models for Thinking with Images Reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(34), 29071–29079. https://doi.org/10.1609/aaai.v40i34.40144

Section

AAAI Technical Track on Machine Learning XI