Reward Model Evaluation via Automatically-Ranked Policy Alignment

Authors

  • Aoran Wang, Nanjing University
  • Lei Ou, Nanjing University
  • Yang Yu, Nanjing University
  • Zongzhang Zhang, Nanjing University

DOI:

https://doi.org/10.1609/aaai.v40i31.39815

Abstract

Evaluating reward models is a fundamental challenge in Reinforcement Learning (RL), particularly in settings where the reward model is learned or manually designed. The standard paradigm for Reward Model Evaluation (RME) involves training an optimal policy via RL on the given reward model and assessing model quality through the performance of the resulting policy. However, this approach conflates the quality of the reward model with the effectiveness of RL training, and is computationally expensive due to the need for policy optimization. Recent RME methods attempt to circumvent this issue by evaluating reward models directly, without RL, but often rely on impractical assumptions such as access to a ground-truth reward, or fail to utilize available supervision in a fine-grained manner. To overcome these limitations, we propose the Policy Preference Alignment Coefficient (PPAC), a novel metric for RME that requires neither RL training nor ground-truth rewards. PPAC first generates a sequence of automatically ranked policy preferences that guarantee monotonic improvement in the policy value, and then quantifies the alignment between these generated preferences and those implied by the candidate reward model. Experimental results across gridworld and continuous control tasks demonstrate that PPAC yields preference sequences with consistently increasing policy values and outperforms existing metrics in evaluating reward model quality.
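To make the abstract's core idea concrete, the following is a minimal, hypothetical sketch of a preference-alignment score, not the paper's actual PPAC algorithm: given trajectories already ranked so that policy value increases monotonically, it measures how often a candidate reward model's implied pairwise preferences agree with that ranking (a Kendall-tau-style coefficient in [-1, 1]). All names and the trajectory format here are illustrative assumptions.

```python
# Illustrative sketch (NOT the published PPAC algorithm): score a candidate
# reward model by its pairwise agreement with an automatically ranked
# sequence of trajectories whose underlying policy value is increasing.
from itertools import combinations

def candidate_return(trajectory, reward_model):
    """Total reward the candidate model assigns to one trajectory,
    where a trajectory is a list of (state, action) pairs (assumed format)."""
    return sum(reward_model(s, a) for s, a in trajectory)

def preference_alignment(ranked_trajectories, reward_model):
    """For each pair (i, j) with i < j (so trajectory j is preferred),
    check whether the candidate model also prefers j; map the agreement
    rate to [-1, 1], like a Kendall-tau coefficient."""
    pairs = list(combinations(range(len(ranked_trajectories)), 2))
    agree = 0
    for i, j in pairs:
        r_i = candidate_return(ranked_trajectories[i], reward_model)
        r_j = candidate_return(ranked_trajectories[j], reward_model)
        if r_j > r_i:
            agree += 1
    return 2.0 * agree / len(pairs) - 1.0

# Toy check: a model consistent with the ranking scores +1, an inverted one -1.
trajs = [[(0, 0)], [(1, 0)], [(2, 0)]]       # states increase with rank
good_model = lambda s, a: float(s)           # rewards higher-ranked states
bad_model = lambda s, a: -float(s)           # inverts the ranking
print(preference_alignment(trajs, good_model))  # 1.0
print(preference_alignment(trajs, bad_model))   # -1.0
```

A perfectly aligned model attains +1 and a fully inverted one -1, which mirrors the intuition that a good reward model should reproduce the preferences implied by the monotonically improving policy sequence.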

Published

2026-03-14

How to Cite

Wang, A., Ou, L., Yu, Y., & Zhang, Z. (2026). Reward Model Evaluation via Automatically-Ranked Policy Alignment. Proceedings of the AAAI Conference on Artificial Intelligence, 40(31), 26124–26132. https://doi.org/10.1609/aaai.v40i31.39815

Section

AAAI Technical Track on Machine Learning VIII