Multi-Reference Preference Optimization for Large Language Models

Authors

  • Hung Le, Deakin University
  • Quan Hung Tran, ServiceNow Research
  • Dung Nguyen, Deakin University
  • Kien Do, Deakin University
  • Saloni Mittal, ServiceNow Research
  • Kelechi Ogueji, ServiceNow Research
  • Svetha Venkatesh, Deakin University

DOI:

https://doi.org/10.1609/aaai.v39i23.34615

Abstract

How can Large Language Models (LLMs) be aligned with human intentions and values? A typical solution is to gather human preferences on model outputs and finetune the LLMs accordingly while ensuring that updates do not deviate too far from a reference model. Recent approaches, such as direct preference optimization (DPO), have eliminated the need for unstable and sluggish reinforcement learning optimization by introducing closed-form supervised losses. However, a significant limitation of the current approach is its design for a single reference model only, neglecting to leverage the collective power of numerous pretrained LLMs. To overcome this limitation, we introduce a novel closed-form formulation for direct preference optimization using multiple reference models. The resulting algorithm, Multi-Reference Preference Optimization (MRPO), leverages broader prior knowledge from diverse reference models, substantially enhancing preference learning capabilities compared to single-reference DPO. Our experiments demonstrate that LLMs finetuned with MRPO generalize better on various preference datasets, regardless of data scarcity or abundance. Furthermore, MRPO effectively finetunes LLMs to exhibit superior performance on several downstream natural language processing benchmarks such as HH-RLHF, GSM8K, and TruthfulQA.
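To make the setting concrete, the following is a minimal sketch of the standard single-reference DPO loss, plus an *illustrative* multi-reference variant that simply averages reference log-probabilities before applying the DPO objective. The actual MRPO formulation is given in the paper; the aggregation scheme, function names, and the `weights` parameter here are assumptions for illustration only.

```python
import math

def dpo_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard single-reference DPO loss for one preference pair.

    logp_w / logp_l are log-probabilities of the chosen (winning) and
    rejected (losing) responses under the policy and reference models.
    """
    margin = (policy_logp_w - ref_logp_w) - (policy_logp_l - ref_logp_l)
    # -log sigmoid(beta * margin)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

def multi_ref_dpo_loss(policy_logp_w, policy_logp_l,
                       refs_logp_w, refs_logp_l,
                       weights=None, beta=0.1):
    """Hypothetical multi-reference variant (NOT the paper's MRPO):
    aggregate the reference log-probs by a weighted average, then
    reuse the single-reference DPO loss.
    """
    n = len(refs_logp_w)
    if weights is None:
        weights = [1.0 / n] * n  # uniform weighting by default
    agg_w = sum(w * lp for w, lp in zip(weights, refs_logp_w))
    agg_l = sum(w * lp for w, lp in zip(weights, refs_logp_l))
    return dpo_loss(policy_logp_w, policy_logp_l, agg_w, agg_l, beta=beta)
```

With identical references, the multi-reference loss reduces to the single-reference one; when the policy matches the aggregated reference on both responses, the margin is zero and the loss equals log 2.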

Published

2025-04-11

How to Cite

Le, H., Tran, Q. H., Nguyen, D., Do, K., Mittal, S., Ogueji, K., & Venkatesh, S. (2025). Multi-Reference Preference Optimization for Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 39(23), 24375–24383. https://doi.org/10.1609/aaai.v39i23.34615

Section

AAAI Technical Track on Natural Language Processing II