High-Confidence Off-Policy (or Counterfactual) Variance Estimation


  • Yash Chandak University of Massachusetts
  • Shiv Shankar University of Massachusetts
  • Philip S. Thomas University of Massachusetts


Reinforcement Learning, Causal Learning, Safety, Robustness & Trustworthiness


Many sequential decision-making systems leverage data collected using prior policies to propose a new policy. For critical applications, it is important that high-confidence guarantees on the new policy’s behavior are provided before deployment, to ensure that the policy will behave as desired. Prior works have studied high-confidence off-policy estimation of the expected return; however, high-confidence off-policy estimation of the variance of returns can be equally critical for high-risk applications. In this paper, we tackle the previously open problem of estimating and bounding, with high confidence, the variance of returns from off-policy data.
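To make the estimation target concrete, the sketch below shows a simple importance-sampling point estimate of the variance of returns under a target policy, computed from trajectories collected by a behavior policy. This is an illustrative plug-in estimator only, not the paper's method; the function name and the array layout (one row of per-step action probabilities per trajectory) are assumptions, and the paper's actual contribution is a high-confidence bound on this quantity, which is not shown here.

```python
import numpy as np

def ois_variance_estimate(returns, behavior_probs, target_probs):
    """Plug-in off-policy point estimate of Var(G) under the target policy.

    Hypothetical sketch: each trajectory's importance weight is the product
    over time steps of (target action probability / behavior action
    probability). Inputs are one row per trajectory.
    """
    # Per-trajectory importance weights: rho_i = prod_t pi_e(a_t|s_t) / pi_b(a_t|s_t).
    rho = np.prod(np.asarray(target_probs) / np.asarray(behavior_probs), axis=1)
    g = np.asarray(returns)
    # Importance-weighted estimates of the first and second moments of the
    # return under the target policy.
    first_moment = np.mean(rho * g)
    second_moment = np.mean(rho * g**2)
    # Var(G) = E[G^2] - (E[G])^2; a point estimate with no confidence guarantee.
    return second_moment - first_moment**2
```

When the behavior and target policies coincide, all weights equal one and the estimate reduces to the ordinary (population) sample variance of the observed returns, which is a useful sanity check.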




How to Cite

Chandak, Y., Shankar, S., & Thomas, P. S. (2021). High-Confidence Off-Policy (or Counterfactual) Variance Estimation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 6939-6947. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16855



AAAI Technical Track on Machine Learning I