Probabilistic Offline Policy Ranking with Approximate Bayesian Computation

Authors

  • Longchao Da, Arizona State University
  • Porter Jenkins, Brigham Young University
  • Trevor Schwantes, Brigham Young University
  • Jeffrey Dotson, Brigham Young University
  • Hua Wei, Arizona State University

DOI:

https://doi.org/10.1609/aaai.v38i18.30019

Keywords:

RU: Probabilistic Inference, ML: Bayesian Learning, ML: Reinforcement Learning, RU: Uncertainty Representations

Abstract

In practice, it is essential to compare and rank candidate policies offline before real-world deployment for safety and reliability. Prior work seeks to solve this offline policy ranking (OPR) problem through value-based methods such as off-policy evaluation (OPE). However, these methods fail to analyze special-case performance (e.g., worst or best cases) because they lack a holistic characterization of a policy's performance. Estimating precise policy values is even more difficult when rewards are not fully accessible under sparse settings. In this paper, we present Probabilistic Offline Policy Ranking (POPR), a framework that addresses the OPR problem by leveraging expert data to characterize the probability of a candidate policy behaving like the experts and by approximating its entire performance posterior distribution to support ranking. POPR does not rely on value estimation, and the derived performance posterior can be used to distinguish candidates in worst-, best-, and average-case scenarios. To estimate the posterior, we propose POPR-EABC, an Energy-based Approximate Bayesian Computation (ABC) method that conducts likelihood-free inference. POPR-EABC reduces the heuristic nature of ABC through a smooth energy function and improves sampling efficiency via a pseudo-likelihood. We empirically demonstrate that POPR-EABC is adequate for evaluating policies in both discrete and continuous action spaces across various experimental environments, and that it facilitates probabilistic comparisons of candidate policies before deployment.
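The abstract describes the approach only at a high level, so the following is a minimal, illustrative Python sketch of the general idea: score a candidate policy against expert demonstrations with a smooth energy function, turn that energy into a pseudo-likelihood, and draw posterior samples of a scalar performance score by likelihood-free (ABC-style) inference. It is not the authors' exact algorithm; the function names (energy, popr_eabc_sketch), the Beta-style pseudo-likelihood kernel, the Metropolis-Hastings sampler, and the assumption of discrete actions are all assumptions made for this sketch.

    import numpy as np

    def energy(policy, states, expert_actions, rng, n_samples=32):
        # Smooth energy: mean disagreement between the candidate policy's actions
        # and the expert's actions on a random mini-batch of expert states.
        # Lower energy = the policy behaves more like the expert. (Assumes
        # discrete actions; any bounded behavioral discrepancy would do.)
        idx = rng.choice(len(states), size=min(n_samples, len(states)), replace=False)
        agree = np.array([policy(states[i]) == expert_actions[i] for i in idx], dtype=float)
        return 1.0 - agree.mean()  # value in [0, 1]

    def popr_eabc_sketch(policy, states, expert_actions, n_iters=2000, temp=0.05, seed=0):
        # Likelihood-free posterior over a scalar score theta in (0, 1): the
        # probability that the policy acts like the expert. A Beta-like
        # pseudo-likelihood built from the energy replaces the hard
        # accept/reject threshold of vanilla ABC (hypothetical form).
        rng = np.random.default_rng(seed)

        def log_pseudo_lik(th, e):
            # Kernel tying the score theta to the observed (stochastic) energy;
            # smaller energy favors larger theta, temp controls sharpness.
            return ((1 - e) / temp) * np.log(th) + (e / temp) * np.log(1 - th)

        theta = 0.5
        e_cur = energy(policy, states, expert_actions, rng)
        samples = []
        for _ in range(n_iters):
            theta_prop = np.clip(theta + rng.normal(0, 0.1), 1e-3, 1 - 1e-3)
            e_prop = energy(policy, states, expert_actions, rng)
            log_alpha = log_pseudo_lik(theta_prop, e_prop) - log_pseudo_lik(theta, e_cur)
            if np.log(rng.uniform()) < log_alpha:  # Metropolis-Hastings step
                theta, e_cur = theta_prop, e_prop
            samples.append(theta)
        return np.array(samples[n_iters // 2:])  # drop burn-in

Ranking then follows from posterior statistics rather than a single value estimate: for example, compare candidates by the posterior mean for average-case performance, or by a low quantile (e.g., np.percentile(posterior, 5)) for worst-case comparisons, which is the kind of probabilistic, case-sensitive ranking the abstract motivates.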

Published

2024-03-24

How to Cite

Da, L., Jenkins, P., Schwantes, T., Dotson, J., & Wei, H. (2024). Probabilistic Offline Policy Ranking with Approximate Bayesian Computation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(18), 20370-20378. https://doi.org/10.1609/aaai.v38i18.30019

Section

AAAI Technical Track on Reasoning under Uncertainty