High-Confidence Off-Policy (or Counterfactual) Variance Estimation

Authors

  • Yash Chandak, University of Massachusetts
  • Shiv Shankar, University of Massachusetts
  • Philip S. Thomas, University of Massachusetts

DOI:

https://doi.org/10.1609/aaai.v35i8.16855

Keywords:

Reinforcement Learning, Causal Learning, Safety, Robustness & Trustworthiness

Abstract

Many sequential decision-making systems leverage data collected using prior policies to propose a new policy. For critical applications, it is important that high-confidence guarantees on the new policy’s behavior are provided before deployment, to ensure that the policy will behave as desired. Prior works have studied high-confidence off-policy estimation of the expected return; however, high-confidence off-policy estimation of the variance of returns can be equally critical for high-risk applications. In this paper, we tackle the previously open problem of estimating and bounding, with high confidence, the variance of returns from off-policy data.
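To make the quantity of interest concrete, the sketch below gives a naive ordinary-importance-sampling point estimate of the variance of returns under a target policy from behavior-policy trajectories. This is not the paper's estimator and provides no high-confidence bound; the function and argument names (`target_prob`, `behavior_prob`) are hypothetical, chosen only for illustration.

```python
import numpy as np

def off_policy_return_variance(trajectories, target_prob, behavior_prob, gamma=1.0):
    """Naive importance-sampling estimate of Var_pi(G), the variance of the
    discounted return G under the target policy pi, using trajectories
    collected by a behavior policy.

    Each trajectory is a list of (state, action, reward) tuples.
    target_prob(s, a) and behavior_prob(s, a) return action probabilities.
    """
    first_moment_samples, second_moment_samples = [], []
    for traj in trajectories:
        rho, G = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            rho *= target_prob(s, a) / behavior_prob(s, a)  # cumulative importance ratio
            G += (gamma ** t) * r                           # discounted return
        first_moment_samples.append(rho * G)        # sample of E_pi[G]
        second_moment_samples.append(rho * G ** 2)  # sample of E_pi[G^2]
    mean_G = np.mean(first_moment_samples)
    mean_G_sq = np.mean(second_moment_samples)
    # Var_pi(G) = E_pi[G^2] - (E_pi[G])^2
    return mean_G_sq - mean_G ** 2
```

This plug-in estimate can have high variance itself and says nothing about confidence; obtaining high-confidence bounds on this quantity is the problem the paper addresses.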

Published

2021-05-18

How to Cite

Chandak, Y., Shankar, S., & Thomas, P. S. (2021). High-Confidence Off-Policy (or Counterfactual) Variance Estimation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 6939-6947. https://doi.org/10.1609/aaai.v35i8.16855

Section

AAAI Technical Track on Machine Learning I