Gradient-Aware Model-Based Policy Search

Authors

  • Pierluca D'Oro, Politecnico di Milano
  • Alberto Maria Metelli, Politecnico di Milano
  • Andrea Tirinzoni, Politecnico di Milano
  • Matteo Papini, Politecnico di Milano
  • Marcello Restelli, Politecnico di Milano

DOI:

https://doi.org/10.1609/aaai.v34i04.5791

Abstract

Traditional model-based reinforcement learning approaches learn a model of the environment dynamics without explicitly considering how it will be used by the agent. In the presence of misspecified model classes, this can lead to poor estimates, as relevant available information is ignored. In this paper, we introduce a novel model-based policy search approach that exploits the knowledge of the current agent policy to learn an approximate transition model, focusing on the portions of the environment that are most relevant for policy improvement. We leverage a weighting scheme, derived from the minimization of the error on the model-based policy gradient estimator, to define a suitable objective function that is optimized for learning the approximate transition model. Then, we integrate this procedure into a batch policy improvement algorithm, named Gradient-Aware Model-based Policy Search (GAMPS), which iteratively learns a transition model and uses it, together with the collected trajectories, to compute the new policy parameters. Finally, we empirically validate GAMPS on benchmark domains, analyzing and discussing its properties.
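To make the gradient-aware weighting idea concrete, the following is a minimal Python sketch, not the authors' exact algorithm: it assumes a one-dimensional linear-Gaussian policy and linear dynamics, and uses a simplified stand-in weight (the discount factor raised to the time step, times the magnitude of the cumulative policy score) inside a weighted least-squares model fit. The exact weighting scheme in the paper is derived from minimizing the error of the model-based policy gradient estimator; everything below (function names, the toy environment, the specific weight formula) is illustrative.

# Minimal sketch of gradient-aware weighted model fitting. The weight
# below is a simplified stand-in, NOT the exact scheme derived in GAMPS.
import numpy as np

rng = np.random.default_rng(0)

def policy_score(theta, s, a, sigma=0.5):
    # Score function of a 1-D linear-Gaussian policy a ~ N(theta * s, sigma^2).
    return (a - theta * s) * s / sigma**2

def gradient_aware_weights(theta, trajectory, gamma=0.99):
    # One weight per transition: gamma^t times |sum of scores up to step t|,
    # so transitions that matter more for the gradient estimate weigh more.
    cum_score, weights = 0.0, []
    for t, (s, a, _) in enumerate(trajectory):
        cum_score += policy_score(theta, s, a)
        weights.append(gamma**t * abs(cum_score))
    return np.array(weights)

def fit_weighted_linear_model(trajectories, weights_per_traj):
    # Weighted least squares for s' ~ [s, a] @ coeffs: equivalent to a
    # weighted Gaussian log-likelihood, focusing accuracy where weights are large.
    X, y, w = [], [], []
    for traj, ws in zip(trajectories, weights_per_traj):
        for (s, a, s_next), wt in zip(traj, ws):
            X.append([s, a]); y.append(s_next); w.append(wt)
    X, y, W = np.array(X), np.array(y), np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # weighted normal equations

# Toy usage: collect trajectories with the current policy, weight them,
# and fit the dynamics model on the weighted transitions.
theta = 0.3
trajectories = []
for _ in range(20):
    s, traj = rng.normal(), []
    for _ in range(10):
        a = theta * s + 0.5 * rng.normal()
        s_next = 0.9 * s + 0.2 * a + 0.05 * rng.normal()  # hypothetical dynamics
        traj.append((s, a, s_next)); s = s_next
    trajectories.append(traj)

weights = [gradient_aware_weights(theta, tr) for tr in trajectories]
coeffs = fit_weighted_linear_model(trajectories, weights)
print("fitted dynamics coefficients:", coeffs)  # roughly [0.9, 0.2]

In a full batch loop, as described in the abstract, the fitted model would then be used together with the collected trajectories to estimate the policy gradient and update the policy parameters, and the procedure would repeat.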

Published

2020-04-03

How to Cite

D’Oro, P., Metelli, A. M., Tirinzoni, A., Papini, M., & Restelli, M. (2020). Gradient-Aware Model-Based Policy Search. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3801-3808. https://doi.org/10.1609/aaai.v34i04.5791

Section

AAAI Technical Track: Machine Learning