Multi-Objective Reinforcement Learning with Continuous Pareto Frontier Approximation

Authors

  • Matteo Pirotta, Politecnico di Milano
  • Simone Parisi, Politecnico di Milano
  • Marcello Restelli, Politecnico di Milano

DOI:

https://doi.org/10.1609/aaai.v29i1.9617

Keywords:

Reinforcement Learning, MDP

Abstract

This paper is about learning a continuous approximation of the Pareto frontier in Multi-Objective Markov Decision Problems (MOMDPs). We propose a policy-based approach that exploits gradient information to generate solutions close to the Pareto ones. Unlike previous policy-gradient multi-objective algorithms, which run n optimization routines to obtain n solutions, our approach performs a single gradient-ascent run that at each step produces an improved continuous approximation of the Pareto frontier. The idea is to use a gradient-based approach to optimize the parameters of a function that defines a manifold in the policy-parameter space, so that the corresponding image in the objective space gets as close as possible to the Pareto frontier. Besides deriving how to compute and estimate such a gradient, we also discuss the non-trivial issue of defining a metric to assess the quality of candidate Pareto frontiers. Finally, the properties of the proposed approach are empirically evaluated on two interesting MOMDPs.
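To make the idea in the abstract concrete, the following is a minimal sketch of the general scheme it describes: parametrize a curve (a one-dimensional manifold) in policy-parameter space, map it into the objective space, score the resulting candidate frontier with a quality indicator, and ascend the gradient of that score with respect to the curve parameters. Everything in the sketch is an illustrative assumption, not the authors' algorithm: the toy bi-objective function J, the linear curve parametrization, the hypervolume-style indicator, and the finite-difference gradient are stand-ins for the estimators and metric derived in the paper.

```python
# Sketch only: single gradient-ascent run on the parameters of a curve in
# policy-parameter space, scored by a frontier-quality indicator.
# All functions below are illustrative assumptions, not the paper's method.
import numpy as np

def J(theta):
    """Toy vector-valued objective: two conflicting goals of a 2-D 'policy'."""
    x, y = theta
    return np.array([-(x - 1.0) ** 2 - y ** 2,   # objective 1
                     -(x + 1.0) ** 2 - y ** 2])  # objective 2

def manifold(rho, ts):
    """Curve theta(t; rho) in policy-parameter space, for t in [0, 1]."""
    a, b, c, d = rho
    return np.stack([a + b * ts, c + d * ts], axis=1)

def frontier_score(rho, ts, ref=np.array([-10.0, -10.0])):
    """Hypervolume-style indicator of the candidate frontier (to be maximized)."""
    points = np.array([J(theta) for theta in manifold(rho, ts)])
    points = points[np.argsort(points[:, 0])]      # sort by first objective
    score, prev_f1 = 0.0, ref[0]
    for f1, f2 in points:
        # Accumulate the area of the strip dominated by this point above ref.
        score += max(f1 - prev_f1, 0.0) * max(f2 - ref[1], 0.0)
        prev_f1 = max(prev_f1, f1)
    return score

def grad_fd(f, rho, eps=1e-4):
    """Central finite-difference gradient of f at rho."""
    g = np.zeros_like(rho)
    for i in range(len(rho)):
        e = np.zeros_like(rho)
        e[i] = eps
        g[i] = (f(rho + e) - f(rho - e)) / (2 * eps)
    return g

ts = np.linspace(0.0, 1.0, 25)                 # samples along the manifold
rho = np.array([-1.5, 3.0, 0.5, -1.0])         # initial curve parameters
for step in range(200):                        # one gradient-ascent run
    rho += 0.01 * grad_fd(lambda r: frontier_score(r, ts), rho)
print("final frontier score:", frontier_score(rho, ts))
```

In the paper, the finite-difference gradient and the toy indicator would be replaced by the derived gradient estimator and the frontier-quality metric discussed in the abstract; the sketch only illustrates how a single optimization run can improve an entire continuous frontier approximation rather than one solution at a time.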

Published

2015-02-21

How to Cite

Pirotta, M., Parisi, S., & Restelli, M. (2015). Multi-Objective Reinforcement Learning with Continuous Pareto Frontier Approximation. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9617

Section

Main Track: Novel Machine Learning Algorithms