PA2D-MORL: Pareto Ascent Directional Decomposition Based Multi-Objective Reinforcement Learning
DOI:
https://doi.org/10.1609/aaai.v38i11.29148
Keywords:
ML: Reinforcement Learning, ML: Deep Learning Algorithms
Abstract
Multi-objective reinforcement learning (MORL) provides an effective solution for decision-making problems involving conflicting objectives. However, achieving high-quality approximations to the Pareto policy set remains challenging, especially in complex tasks with continuous or high-dimensional state-action spaces. In this paper, we propose the Pareto Ascent Directional Decomposition based Multi-Objective Reinforcement Learning (PA2D-MORL) method, which constructs an efficient scheme for multi-objective problem decomposition and policy improvement, leading to a superior approximation of the Pareto policy set. The proposed method leverages the Pareto ascent direction to select the scalarization weights and computes the multi-objective policy gradient, which determines the policy optimization direction and ensures joint improvement on all objectives. Meanwhile, multiple policies are selectively optimized under an evolutionary framework to approximate the Pareto frontier from different directions. Additionally, a Pareto adaptive fine-tuning approach is applied to enhance the density and spread of the Pareto frontier approximation. Experiments on various multi-objective robot control tasks show that the proposed method clearly outperforms the current state-of-the-art algorithm in terms of both quality and stability of the outcomes.
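The Pareto ascent direction mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes the common MGDA-style construction in which, for two objectives, the ascent direction is the minimum-norm convex combination of the per-objective gradients, with the combination coefficient playing the role of the scalarization weight:

```python
def pareto_ascent_direction(g1, g2):
    """Min-norm point d = a*g1 + (1-a)*g2 over a in [0, 1].

    To first order, ascending along d does not decrease either objective.
    (Illustrative sketch only; the two-gradient case has a closed form.)
    """
    diff = [x - y for x, y in zip(g1, g2)]
    denom = sum(d * d for d in diff)
    if denom == 0.0:  # gradients coincide; any convex combination works
        return list(g1)
    # Closed form for argmin_a ||a*g1 + (1-a)*g2||^2, clipped to [0, 1]
    a = sum((y - x) * y for x, y in zip(g1, g2)) / denom
    a = min(max(a, 0.0), 1.0)
    return [a * x + (1 - a) * y for x, y in zip(g1, g2)]

# Example: orthogonal per-objective gradients
d = pareto_ascent_direction([1.0, 0.0], [0.0, 1.0])
# -> [0.5, 0.5]; positive inner product with both gradients,
# so both objectives improve along d.
```

When the gradients point in exactly opposite directions, the min-norm combination is the zero vector, signaling a Pareto-stationary policy where no joint improvement is possible.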
Published
2024-03-24
How to Cite
Hu, T., & Luo, B. (2024). PA2D-MORL: Pareto Ascent Directional Decomposition Based Multi-Objective Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(11), 12547–12555. https://doi.org/10.1609/aaai.v38i11.29148
Section
AAAI Technical Track on Machine Learning II