Sequential Preference Optimization: Multi-Dimensional Preference Alignment with Implicit Reward Modeling
DOI:
https://doi.org/10.1609/aaai.v39i26.34963
Abstract
Human preference alignment is critical for building powerful and reliable large language models (LLMs). However, current methods either ignore the multi-dimensionality of human preferences (e.g., helpfulness and harmlessness) or struggle with the complexity of managing multiple reward models. To address these issues, we propose Sequential Preference Optimization (SPO), a method that sequentially fine-tunes LLMs to align with multiple dimensions of human preferences. SPO avoids explicit reward modeling and directly optimizes the models to align with nuanced human preferences. We theoretically derive the closed-form optimal SPO policy and loss function. A gradient analysis shows how SPO fine-tunes LLMs on a new dimension while maintaining alignment on previously optimized dimensions. Empirical results on LLMs of different sizes and multiple evaluation datasets demonstrate that SPO successfully aligns LLMs across multiple dimensions of human preferences and significantly outperforms the baselines.
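For context on what "implicit reward modeling" means here: the single-dimension building block of this family of methods is the DPO objective (Rafailov et al. 2023), which folds the reward model into the policy itself. As a hedged sketch only (the sequential SPO loss derived in the paper has its own form and notation), given a prompt x with preferred and dispreferred responses y_w and y_l, a reference policy pi_ref, and temperature beta, the standard single-dimension objective is

\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]

so no separate reward network is trained. SPO, as described in the abstract, extends this idea by fine-tuning sequentially over several preference dimensions while preserving alignment on the dimensions already optimized; the exact sequential loss and its constraints are given in the paper.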
Published
2025-04-11
How to Cite
Lou, X., Zhang, J., Xie, J., Liu, L., Yan, D., & Huang, K. (2025). Sequential Preference Optimization: Multi-Dimensional Preference Alignment with Implicit Reward Modeling. Proceedings of the AAAI Conference on Artificial Intelligence, 39(26), 27509–27517. https://doi.org/10.1609/aaai.v39i26.34963
Section
AAAI Technical Track on AI Alignment