Pareto Set Learning for Multi-Objective Reinforcement Learning

Authors

  • Erlong Liu, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
  • Yu-Chang Wu, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
  • Xiaobin Huang, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
  • Chengrui Gao, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
  • Ren-Jian Wang, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
  • Ke Xue, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
  • Chao Qian, School of Artificial Intelligence, Nanjing University, Nanjing 210023, China

DOI:

https://doi.org/10.1609/aaai.v39i18.34068

Abstract

Multi-objective decision-making problems arise in numerous real-world scenarios, such as video games, navigation, and robotics. Given the clear advantages of Reinforcement Learning (RL) in optimizing decision-making processes, researchers have delved into the development of Multi-Objective RL (MORL) methods for solving multi-objective decision-making problems. However, previous methods either cannot obtain the entire Pareto front, or employ only a single policy network for all preferences over multiple objectives, which may fail to produce personalized solutions for each preference. To address these limitations, we propose a novel decomposition-based framework for MORL, Pareto Set Learning for MORL (PSL-MORL), which harnesses the generation capability of a hypernetwork to produce the parameters of the policy network for each decomposition weight, efficiently generating relatively distinct policies for the various scalarized subproblems. PSL-MORL is a general framework that is compatible with any RL algorithm. Our theoretical results guarantee the superior model capacity of PSL-MORL and the optimality of the obtained policy network. Through extensive experiments on diverse benchmarks, we demonstrate the effectiveness of PSL-MORL in achieving dense coverage of the Pareto front, significantly outperforming state-of-the-art MORL methods on both the hypervolume and sparsity indicators.
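The core idea described in the abstract, a hypernetwork that maps each decomposition (preference) weight vector to the parameters of a separate policy, can be illustrated with a minimal sketch. Everything below is a hypothetical toy: the shapes, the linear hypernetwork `H`, and the helper names `hyper_policy` and `scalarize` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, ACT_DIM, N_OBJ = 4, 2, 2
POLICY_PARAMS = OBS_DIM * ACT_DIM  # weights of a toy linear policy (no bias)

# Hypernetwork (here just one linear map) from preference space
# to the policy's parameter space. In practice this would be a
# trained neural network.
H = rng.normal(scale=0.1, size=(POLICY_PARAMS, N_OBJ))

def hyper_policy(preference):
    """Generate policy weights for a given preference over objectives."""
    theta = H @ preference                   # flat parameter vector
    return theta.reshape(OBS_DIM, ACT_DIM)   # linear policy: action = obs @ W

def scalarize(returns, preference):
    """Weighted-sum scalarization of a vector of per-objective returns."""
    return float(np.dot(returns, preference))

# Two preferences yield two distinct policies from the same hypernetwork,
# each targeting a different scalarized subproblem.
w1 = np.array([0.8, 0.2])
w2 = np.array([0.2, 0.8])
policy1, policy2 = hyper_policy(w1), hyper_policy(w2)
obs = rng.normal(size=OBS_DIM)
action1, action2 = obs @ policy1, obs @ policy2
```

Sweeping the preference vector across the simplex and querying the hypernetwork at each point is what lets a decomposition-based method trace out a dense approximation of the Pareto front with a single generator model.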

Published

2025-04-11

How to Cite

Liu, E., Wu, Y.-C., Huang, X., Gao, C., Wang, R.-J., Xue, K., & Qian, C. (2025). Pareto Set Learning for Multi-Objective Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(18), 18789-18797. https://doi.org/10.1609/aaai.v39i18.34068

Section

AAAI Technical Track on Machine Learning IV