Control-Oriented Model-Based Reinforcement Learning with Implicit Differentiation

Authors

  • Evgenii Nikishin (Mila, Université de Montréal)
  • Romina Abachi (Vector Institute, University of Toronto)
  • Rishabh Agarwal (Google Research, Brain Team; Mila, Université de Montréal)
  • Pierre-Luc Bacon (Mila, Université de Montréal; Facebook CIFAR AI Chair)

DOI:

https://doi.org/10.1609/aaai.v36i7.20758

Keywords:

Machine Learning (ML), Intelligent Robotics (ROB), Reasoning Under Uncertainty (RU), Search And Optimization (SO)

Abstract

The shortcomings of maximum likelihood estimation in the context of model-based reinforcement learning have been highlighted by an increasing number of papers. When the model class is misspecified or has limited representational capacity, model parameters with high likelihood do not necessarily yield high performance of the agent on a downstream control task. To alleviate this problem, we propose an end-to-end approach for model learning which directly optimizes the expected returns using implicit differentiation. We treat a value function that satisfies the Bellman optimality operator induced by the model as an implicit function of model parameters and show how to differentiate the function. We provide theoretical and empirical evidence highlighting the benefits of our approach in the model misspecification regime compared to likelihood-based methods.
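The core idea, treating the optimal value function as an implicit function of the model parameters and differentiating it through the Bellman fixed-point condition, can be illustrated with a short sketch. Below is a minimal JAX example on a tiny tabular model; it is not the authors' implementation, and the names, shapes, and the softmax-parameterized transition model are assumptions made purely for illustration.

```python
# Minimal sketch: implicit differentiation of Q* through the Bellman
# fixed point of a small tabular model (illustrative, not the paper's code).
import jax
import jax.numpy as jnp

S, A, gamma = 4, 2, 0.9
logits = jax.random.normal(jax.random.PRNGKey(0), (S, A, S))   # model parameters theta
rewards = jax.random.normal(jax.random.PRNGKey(1), (S, A))

def bellman_residual(q, logits):
    """F(Q, theta) = Q - T_theta Q; equals zero at the fixed point Q*."""
    p = jax.nn.softmax(logits, axis=-1)                 # P_theta(s' | s, a)
    v = jnp.max(q, axis=-1)                             # V(s') = max_a' Q(s', a')
    tq = rewards + gamma * jnp.einsum('ijk,k->ij', p, v)
    return q - tq

def solve_q(logits, iters=500):
    """Find Q* by value iteration; no gradient tracking is needed here."""
    q = jnp.zeros((S, A))
    for _ in range(iters):
        q = q - bellman_residual(q, logits)             # q <- T_theta q
    return q

def q_star_and_grad(logits):
    q_star = solve_q(logits)
    # Implicit function theorem: dQ*/dtheta = -(dF/dQ)^{-1} dF/dtheta at Q*.
    f_flat = lambda q_flat, lg: bellman_residual(q_flat.reshape(S, A), lg).ravel()
    dF_dq = jax.jacobian(f_flat, argnums=0)(q_star.ravel(), logits)      # (SA, SA)
    dF_dtheta = jax.jacobian(f_flat, argnums=1)(q_star.ravel(), logits)  # (SA, S, A, S)
    dq_dtheta = -jnp.linalg.solve(dF_dq, dF_dtheta.reshape(S * A, -1))
    return q_star, dq_dtheta.reshape(S, A, S, A, S)

q_star, dq = q_star_and_grad(logits)
print(q_star.shape, dq.shape)   # (4, 2) and (4, 2, 4, 2, 4)
```

In this sketch, the gradient of Q* with respect to the model parameters could then be chained with the gradient of the expected return with respect to Q*, so that the model is updated directly for control performance rather than for likelihood, which is the direction the abstract describes.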

Published

2022-06-28

How to Cite

Nikishin, E., Abachi, R., Agarwal, R., & Bacon, P.-L. (2022). Control-Oriented Model-Based Reinforcement Learning with Implicit Differentiation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(7), 7886-7894. https://doi.org/10.1609/aaai.v36i7.20758

Section

AAAI Technical Track on Machine Learning II