Long-form RewardBench: Evaluating Reward Models for Long-form Generation

Authors

  • Hui Huang Harbin Institute of Technology
  • Yancheng He Harbin Institute of Technology
  • Wei Liu Harbin Institute of Technology
  • Muyun Yang Harbin Institute of Technology
  • Jiaheng Liu Nanjing University
  • Kehai Chen Harbin Institute of Technology (Shenzhen)
  • Bing Xu Harbin Institute of Technology
  • Conghui Zhu Harbin Institute of Technology
  • Hailong Cao Harbin Institute of Technology
  • Tiejun Zhao Harbin Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v40i37.40376

Abstract

The widespread adoption of reinforcement learning-based alignment highlights the growing importance of reward models. Numerous benchmarks have been built to evaluate reward models across various domains and scenarios. However, a significant gap remains in assessing reward models for long-form generation, despite its critical role in real-world applications. To bridge this gap, we introduce Long-form RewardBench, the first reward modeling testbed specifically designed for long-form generation. Our benchmark encompasses five key subtasks: QA, RAG, Chat, Writing, and Reasoning. We collected instruction and preference data through a meticulously designed multi-stage data collection process and conducted extensive experiments on 20+ mainstream reward models, including both classifiers and generative models. Our findings reveal that current models still lack long-form reward modeling capabilities. Furthermore, we designed a novel Long-form Needle-in-a-Haystack Test, which revealed a correlation between reward modeling performance and both the error's position within a response and the overall response length, with distinct characteristics observed between classification and generative models. Finally, we demonstrate that classifiers exhibit better generalizability than generative models trained on the same data. As the first benchmark for long-form reward modeling, this work aims to offer a robust platform for tracking progress in this crucial area.

Published

2026-03-14

How to Cite

Huang, H., He, Y., Liu, W., Yang, M., Liu, J., Chen, K., Xu, B., Zhu, C., Cao, H., & Zhao, T. (2026). Long-form RewardBench: Evaluating Reward Models for Long-form Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(37), 31149-31157. https://doi.org/10.1609/aaai.v40i37.40376

Section

AAAI Technical Track on Natural Language Processing II