Model-Based Offline Reinforcement Learning with Local Misspecification

Authors

  • Kefan Dong, Stanford University
  • Yannis Flet-Berliac, Stanford University
  • Allen Nie, Stanford University
  • Emma Brunskill, Stanford University

DOI:

https://doi.org/10.1609/aaai.v37i6.25903

Keywords:

ML: Reinforcement Learning Theory, ML: Reinforcement Learning Algorithms

Abstract

We present a model-based offline reinforcement learning policy performance lower bound that explicitly captures dynamics model misspecification and distribution mismatch, and we propose an empirical algorithm for optimal offline policy selection. Theoretically, we prove a novel safe policy improvement theorem by establishing pessimistic approximations to the value function. Our key insight is to jointly consider selecting over dynamics models and policies: as long as a dynamics model can accurately represent the dynamics of the state-action pairs visited by a given policy, it is possible to approximate the value of that particular policy. We analyze our lower bound in the LQR setting and also show that it yields policy selection performance competitive with previous lower bounds across a set of D4RL tasks.
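The core idea sketched in the abstract, scoring each candidate policy by a pessimistic value estimate that jointly considers which dynamics model fits the state-action pairs that policy visits, can be illustrated with a minimal toy example. The sketch below is not the paper's algorithm or penalty: the tabular MDP, the candidate models and policies, the `occupancy`, `value`, and `pessimistic_score` helpers, and the penalty weight `lam` are all illustrative assumptions.

```python
import numpy as np

# Toy tabular setting (illustrative assumption, not the paper's setup).
S, A, H = 4, 2, 10                                  # states, actions, horizon
rng = np.random.default_rng(0)
true_P = rng.dirichlet(np.ones(S), size=(S, A))     # unknown true dynamics, shape (S, A, S)
R = rng.uniform(size=(S, A))                        # known reward table

def occupancy(P, policy):
    """Average state-action visitation of a deterministic policy under model P."""
    d = np.zeros((S, A))
    s_dist = np.zeros(S)
    s_dist[0] = 1.0                                 # fixed start state
    for _ in range(H):
        for s in range(S):
            d[s, policy[s]] += s_dist[s]
        s_dist = np.array([
            sum(s_dist[s] * P[s, policy[s], s2] for s in range(S))
            for s2 in range(S)
        ])
    return d / H

def value(P, policy):
    """Average per-step return of the policy under model P."""
    return float((occupancy(P, policy) * R).sum())

def pessimistic_score(policy, models, P_data, lam=1.0):
    """Pessimistic value estimate: each model's value is penalized by its
    misspecification on the state-action pairs this policy visits; any model
    that fits those pairs well gives a usable estimate, so take the best one."""
    scores = []
    for P in models:
        d = occupancy(P, policy)
        local_err = float((d * np.abs(P - P_data).sum(axis=-1)).sum())
        scores.append(value(P, policy) - lam * local_err)
    return max(scores)

# Candidate dynamics models (e.g., fit on offline data) and candidate policies.
models = []
for eps in (0.01, 0.05):
    noisy = np.abs(true_P + rng.normal(0.0, eps, true_P.shape))
    models.append(noisy / noisy.sum(axis=-1, keepdims=True))
P_data = models[0]                                  # stand-in for empirical dynamics
policies = [rng.integers(A, size=S) for _ in range(5)]

best = max(policies, key=lambda pi: pessimistic_score(pi, models, P_data))
print("selected policy:", best)
print("pessimistic value estimate:", pessimistic_score(best, models, P_data))
```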

Published

2023-06-26

How to Cite

Dong, K., Flet-Berliac, Y., Nie, A., & Brunskill, E. (2023). Model-Based Offline Reinforcement Learning with Local Misspecification. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 7423-7431. https://doi.org/10.1609/aaai.v37i6.25903

Section

AAAI Technical Track on Machine Learning I