An Implicit Trust Region Approach to Behavior Regularized Offline Reinforcement Learning

Authors

  • Zhe Zhang, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence
  • Xiaoyang Tan, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence

DOI:

https://doi.org/10.1609/aaai.v38i15.29637

Keywords:

ML: Reinforcement Learning

Abstract

We revisit behavior regularization, a popular approach to mitigating extrapolation error in offline reinforcement learning (RL), and show that current behavior regularization methods may suffer from unstable learning and hinder policy improvement. Motivated by this, we propose a novel reward shaping-based behavior regularization method in which the log-probability ratio between the learned policy and the behavior policy is monitored during learning. We show that this is equivalent to an implicit but computationally lightweight trust region mechanism, which helps mitigate the influence of value-function estimation errors and leads to more stable performance improvement. Empirical results on the popular D4RL benchmark verify the effectiveness of the presented method, which achieves promising performance compared with several state-of-the-art offline RL algorithms.
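
To make the idea concrete, below is a minimal sketch of one way a reward shaping-based behavior regularizer of this kind could look. It is an illustration under stated assumptions, not the authors' implementation: the function name shaped_reward, the penalty coefficient alpha, and the clipping radius eps are hypothetical choices introduced here for exposition.

```python
import torch

def shaped_reward(reward, logp_pi, logp_beta, alpha=0.1, eps=1.0):
    """Hypothetical sketch of reward shaping with a behavior-regularization term.

    The per-transition reward is penalized by the log-probability ratio between
    the learned policy (logp_pi) and the behavior policy (logp_beta). Clipping
    the ratio to [-eps, eps] bounds the penalty, which acts like an implicit
    trust region around the behavior policy. alpha and eps are illustrative.
    """
    log_ratio = torch.clamp(logp_pi - logp_beta, min=-eps, max=eps)
    return reward - alpha * log_ratio
```

In such a setup, the shaped reward would simply replace the environment reward in an off-the-shelf actor-critic update, so the regularization is applied through the value target rather than through an explicit policy constraint.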

Published

2024-03-24

How to Cite

Zhang, Z., & Tan, X. (2024). An Implicit Trust Region Approach to Behavior Regularized Offline Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(15), 16944-16952. https://doi.org/10.1609/aaai.v38i15.29637

Section

AAAI Technical Track on Machine Learning VI