P2S: Probabilistic Process Supervision for General-Domain Reasoning Question Answering

Authors

  • Wenlin Zhong School of Software Technology, Zhejiang University
  • Chengyuan Liu College of Computer Science and Technology, Zhejiang University
  • Yiquan Wu Guanghua Law School, Zhejiang University
  • Bovin Tan Guanghua Law School, Zhejiang University
  • Changlong Sun Guanghua Law School, Zhejiang University
  • Yi Wang Chongqing Ant Consumer Finance Co., Ltd., Ant Group
  • Xiaozhong Liu Worcester Polytechnic Institute, Worcester, USA
  • Kun Kuang College of Computer Science and Technology, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v40i41.40813

Abstract

While reinforcement learning with verifiable rewards (RLVR) has advanced LLM reasoning in structured domains like mathematics and programming, its application to general-domain reasoning tasks remains challenging due to the absence of verifiable reward signals. To this end, methods like Reinforcement Learning with Reference Probability Reward (RLPR) have emerged, leveraging the probability of generating the final answer as a reward signal. However, these outcome-focused approaches neglect crucial step-by-step supervision of the reasoning process itself. To address this gap, we introduce Probabilistic Process Supervision (P2S), a novel self-supervision framework that provides fine-grained process rewards without requiring a separate reward model or human-annotated reasoning steps. During reinforcement learning, P2S synthesizes and filters a high-quality reference reasoning chain (gold-CoT). The core of our method is to calculate a Path Faithfulness Reward (PFR) for each reasoning step, derived from the conditional probability of generating the gold-CoT's suffix given the model's current reasoning prefix. Crucially, this PFR can be flexibly integrated with any outcome-based reward, directly tackling the reward sparsity problem by providing dense guidance. Extensive experiments on reading comprehension and medical question answering benchmarks show that P2S significantly outperforms strong baselines.
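To make the abstract's reward construction concrete, here is a minimal sketch of how a Path Faithfulness Reward and its combination with an outcome reward could look. All names, the length-normalization (geometric-mean token probability), and the `beta` weighting are illustrative assumptions, not the paper's actual formulation; in practice the per-token log-probabilities would come from a forward pass of the policy model over the gold-CoT suffix conditioned on the reasoning prefix.

```python
import math

def path_faithfulness_reward(suffix_token_logprobs):
    """Hypothetical PFR: length-normalized conditional probability of the
    gold-CoT suffix given the model's current reasoning prefix.

    suffix_token_logprobs: per-token log-probabilities of the gold suffix
    tokens under the policy model (assumed to be precomputed).
    Returns the geometric-mean token probability, a value in (0, 1].
    """
    if not suffix_token_logprobs:
        return 0.0
    avg_logprob = sum(suffix_token_logprobs) / len(suffix_token_logprobs)
    return math.exp(avg_logprob)

def combined_reward(step_pfrs, outcome_reward, beta=0.5):
    """Blend dense per-step PFRs with a sparse outcome reward.

    beta is an illustrative mixing weight; the paper only states that PFR
    can be integrated with any outcome-based reward.
    """
    dense = sum(step_pfrs) / len(step_pfrs) if step_pfrs else 0.0
    return beta * dense + (1.0 - beta) * outcome_reward

# A prefix that makes the gold suffix likely yields a higher PFR than one
# that makes it unlikely.
faithful = path_faithfulness_reward([-0.1, -0.2, -0.1])   # high token probs
unfaithful = path_faithfulness_reward([-2.0, -3.0, -2.5])  # low token probs
```

The geometric-mean normalization keeps rewards comparable across suffixes of different lengths, which is one common way to avoid penalizing long reference chains.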

Published

2026-03-14

How to Cite

Zhong, W., Liu, C., Wu, Y., Tan, B., Sun, C., Wang, Y., … Kuang, K. (2026). P2S: Probabilistic Process Supervision for General-Domain Reasoning Question Answering. Proceedings of the AAAI Conference on Artificial Intelligence, 40(41), 35076–35084. https://doi.org/10.1609/aaai.v40i41.40813

Section

AAAI Technical Track on Natural Language Processing VI