Robust Multi-Objective Preference Alignment with Online DPO

Authors

  • Raghav Gupta, Google DeepMind
  • Ryan Sullivan, University of Maryland, College Park
  • Yunxuan Li, Google
  • Samrat Phatale, Google DeepMind
  • Abhinav Rastogi, Google DeepMind

DOI:

https://doi.org/10.1609/aaai.v39i26.34942

Abstract

Multi-objective preference alignment of large language models (LLMs) is critical for developing AI systems that are more configurable, personalizable, helpful, and safe. However, optimizing model outputs to satisfy diverse objectives with variable weights at inference time, as truly personalized models require, presents a significant challenge. Existing approaches are either computationally expensive to train or do not sufficiently steer model behaviors. This paper introduces the Multi-Objective Online DPO (MO-ODPO) algorithm, designed to robustly and efficiently align model behaviors with multiple, potentially conflicting human preferences. Our approach incorporates a prompt conditioning mechanism, allowing us to train a single preference-conditional policy that can adapt to new preference combinations at inference. Experiments on two popular benchmarks show that MO-ODPO Pareto-dominates existing baselines while providing excellent inference-time steerability between diverse objectives.
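The prompt-conditioning idea in the abstract can be illustrated with a minimal sketch: preference weights over objectives are serialized into a preamble prepended to each prompt, so a single policy can be steered at inference simply by changing the weights. The function name, the token-style encoding, and the objective names below are illustrative assumptions, not the paper's actual implementation.

```python
def condition_prompt(prompt: str, weights: dict[str, float]) -> str:
    """Prepend a normalized preference-weight preamble to the prompt.

    This is a hypothetical encoding sketch: real systems may use special
    tokens, embeddings, or a different serialization of the weight vector.
    """
    total = sum(weights.values())
    if total <= 0:
        raise ValueError("weights must sum to a positive value")
    # Normalize so weights form a convex combination over objectives.
    normalized = {k: v / total for k, v in weights.items()}
    # Serialize deterministically (sorted keys) so identical weight vectors
    # always produce identical conditioning strings.
    preamble = " ".join(f"<{k}={w:.2f}>" for k, w in sorted(normalized.items()))
    return f"{preamble} {prompt}"


# Example: steer toward helpfulness 3x more than safety (hypothetical objectives).
conditioned = condition_prompt(
    "Summarize this article.", {"helpfulness": 3.0, "safety": 1.0}
)
print(conditioned)  # <helpfulness=0.75> <safety=0.25> Summarize this article.
```

At inference, sweeping the weight vector trades off the objectives without retraining, which is the steerability property the abstract claims for the preference-conditional policy.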

Published

2025-04-11

How to Cite

Gupta, R., Sullivan, R., Li, Y., Phatale, S., & Rastogi, A. (2025). Robust Multi-Objective Preference Alignment with Online DPO. Proceedings of the AAAI Conference on Artificial Intelligence, 39(26), 27321-27329. https://doi.org/10.1609/aaai.v39i26.34942

Section

AAAI Technical Track on AI Alignment