DARL: Distance-Aware Uncertainty Estimation for Offline Reinforcement Learning

Authors

  • Hongchang Zhang, Tsinghua University
  • Jianzhun Shao, Tsinghua University
  • Shuncheng He, Tsinghua University
  • Yuhang Jiang, Tsinghua University
  • Xiangyang Ji, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v37i9.26327

Keywords:

ML: Reinforcement Learning Algorithms

Abstract

To facilitate offline reinforcement learning, uncertainty estimation is commonly used to detect out-of-distribution data. Through inspection, we show that current explicit uncertainty estimators such as Monte Carlo dropout and model ensembles are not competent to provide trustworthy uncertainty estimates in offline reinforcement learning. Accordingly, we propose a non-parametric distance-aware uncertainty estimator that is sensitive to changes in the input space for offline reinforcement learning. Based on our new estimator, adaptive truncated quantile critics are proposed to underestimate the values of out-of-distribution samples. We show that the proposed distance-aware uncertainty estimator offers better uncertainty estimation than previous methods. Experimental results demonstrate that our proposed DARL method is competitive with state-of-the-art methods on offline evaluation tasks.
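The two ideas the abstract names can be illustrated with a minimal sketch: a non-parametric, distance-aware uncertainty proxy (here, a k-nearest-neighbor distance to the offline dataset, an assumption; the paper's exact estimator may differ) feeding an adaptive truncation of quantile critic targets so that samples far from the data are valued pessimistically. All function names and the specific truncation rule below are illustrative, not the paper's formulation.

```python
import numpy as np

def knn_distance_uncertainty(query, dataset, k=5):
    """Non-parametric uncertainty proxy: mean distance from the query
    to its k nearest neighbors in the offline dataset. Grows with
    distance from the data, i.e. it is distance-aware by construction."""
    dists = np.linalg.norm(dataset - query, axis=1)
    return np.sort(dists)[:k].mean()

def adaptive_truncated_target(quantiles, uncertainty, scale=1.0):
    """Adaptive truncated quantile target (illustrative rule): drop a
    number of the highest quantile estimates proportional to the
    uncertainty, then average the rest. Higher uncertainty means more
    truncation, hence a more pessimistic (lower) value target."""
    n = len(quantiles)
    n_drop = min(n - 1, int(round(scale * uncertainty * n)))
    kept = np.sort(quantiles)[: n - n_drop]
    return kept.mean()

# Toy usage: a query near the data gets low uncertainty and mild
# truncation; a far (out-of-distribution) query gets heavy truncation.
rng = np.random.default_rng(0)
dataset = rng.normal(size=(200, 2))          # stand-in offline dataset
quantiles = np.linspace(-1.0, 1.0, 25)       # stand-in critic quantiles
u_near = knn_distance_uncertainty(np.zeros(2), dataset)
u_far = knn_distance_uncertainty(np.full(2, 10.0), dataset)
t_near = adaptive_truncated_target(quantiles, u_near)
t_far = adaptive_truncated_target(quantiles, u_far)
```

Because the k-NN distance is computed directly against the dataset rather than through a parametric model, it cannot collapse to spurious confidence on unseen inputs the way dropout or ensemble disagreement can.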

Published

2023-06-26

How to Cite

Zhang, H., Shao, J., He, S., Jiang, Y., & Ji, X. (2023). DARL: Distance-Aware Uncertainty Estimation for Offline Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 11210-11218. https://doi.org/10.1609/aaai.v37i9.26327

Section

AAAI Technical Track on Machine Learning IV