Aligning Language Models Using Follow-up Likelihood as Reward Signal

Authors

  • Chen Zhang, National University of Singapore
  • Dading Chong, Peking University
  • Feng Jiang, The Chinese University of Hong Kong, Shenzhen; Shenzhen Research Institute of Big Data; University of Science and Technology of China
  • Chengguang Tang, Tencent AI Lab
  • Anningzhe Gao, Shenzhen Research Institute of Big Data
  • Guohua Tang, Tencent AI Lab
  • Haizhou Li, The Chinese University of Hong Kong, Shenzhen; National University of Singapore; Shenzhen Research Institute of Big Data

DOI:

https://doi.org/10.1609/aaai.v39i24.34776

Abstract

In natural human-to-human conversations, participants often receive feedback signals from one another based on their follow-up reactions. These reactions can include verbal responses, facial expressions, changes in emotional state, and other non-verbal cues. Similarly, in human-machine interactions, the machine can leverage the user's follow-up utterances as feedback signals to assess whether it has appropriately addressed the user's request. We therefore propose using the likelihood of follow-up utterances as rewards to differentiate preferred responses from less favored ones, without relying on human or commercial LLM-based preference annotations. Our proposed reward mechanism, "Follow-up Likelihood as Reward" (FLR), matches, on 8 pairwise-preference and 4 rating-based benchmarks, the performance of strong reward models trained on large-scale human- or GPT-4-annotated data. Building upon the FLR mechanism, we propose to automatically mine preference data from the online generations of a base policy model. The preference data are subsequently used to boost the helpfulness of the base model through direct alignment from preference (DAP) methods, such as direct preference optimization (DPO). Lastly, we demonstrate that fine-tuning the language model that provides the follow-up likelihood with natural language feedback data significantly enhances FLR's performance on reward modeling benchmarks and its effectiveness in improving the base policy model's helpfulness.
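As a rough illustration of the FLR idea summarized above, the sketch below scores a candidate response by the log-likelihood a causal language model assigns to a positive follow-up utterance appended after the dialogue, then uses the scores to form a chosen/rejected pair for DPO-style training. The model name, chat formatting, and the example follow-up utterance are assumptions for illustration only, not the paper's released setup.

    # Minimal sketch of "Follow-up Likelihood as Reward" (FLR), per the abstract.
    # Assumptions (not from the paper): gpt2 as the scoring LM, a single hand-written
    # positive follow-up, and a simple "User/Assistant" prompt template.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # any causal LM that can provide follow-up likelihoods
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    POSITIVE_FOLLOW_UP = "Thanks, that answers my question."  # hypothetical follow-up

    def flr_reward(context: str, response: str) -> float:
        """Mean log-likelihood of the positive follow-up given context + response."""
        prefix = f"User: {context}\nAssistant: {response}\nUser: "
        prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
        follow_ids = tokenizer(POSITIVE_FOLLOW_UP, return_tensors="pt").input_ids
        input_ids = torch.cat([prefix_ids, follow_ids], dim=1)
        with torch.no_grad():
            logits = model(input_ids).logits
        # log-probability of each next token, conditioned on all preceding tokens
        log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
        targets = input_ids[:, 1:]
        token_log_probs = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
        follow_len = follow_ids.shape[1]
        # average over only the follow-up tokens
        return token_log_probs[0, -follow_len:].mean().item()

    # Rank two online generations of the base policy; the higher-reward one becomes
    # the "chosen" side of a preference pair for DAP methods such as DPO.
    context = "How do I reset my router?"
    r_a = "Hold the reset button for ten seconds, then wait for the lights to stabilize."
    r_b = "I don't know."
    chosen, rejected = (r_a, r_b) if flr_reward(context, r_a) > flr_reward(context, r_b) else (r_b, r_a)
    print("chosen:", chosen)

The key design point, as the abstract describes it, is that no human or commercial-LLM preference annotation is needed: the follow-up likelihood alone supplies the ranking signal, and the paper further fine-tunes the scoring model on natural language feedback to strengthen it.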

Published

2025-04-11

How to Cite

Zhang, C., Chong, D., Jiang, F., Tang, C., Gao, A., Tang, G., & Li, H. (2025). Aligning Language Models Using Follow-up Likelihood as Reward Signal. Proceedings of the AAAI Conference on Artificial Intelligence, 39(24), 25832-25841. https://doi.org/10.1609/aaai.v39i24.34776

Issue

Vol. 39 No. 24 (2025)

Section

AAAI Technical Track on Natural Language Processing III