Tighter Value Function Bounds for Bayesian Reinforcement Learning

Authors

  • Kanghoon Lee, KAIST
  • Kee-Eung Kim, KAIST

DOI:

https://doi.org/10.1609/aaai.v29i1.9700

Abstract

Bayesian reinforcement learning (BRL) provides a principled framework for the optimal exploration-exploitation tradeoff in reinforcement learning. We focus on model-based BRL, which admits a compact formulation of the optimal tradeoff from the Bayesian perspective. However, computing the Bayes-optimal policy remains a computational challenge. In this paper, we propose a novel approach to computing tighter bounds on the Bayes-optimal value function, which is crucial for improving the performance of many model-based BRL algorithms. We then show how our bounds can be integrated into real-time AO* heuristic search, and provide a theoretical analysis of the impact of improved bounds on search efficiency. We also provide empirical results on standard BRL domains that demonstrate the effectiveness of our approach.
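The paper's tighter bounds are developed in the full text; as a rough illustration of the kind of quantity being bounded, the sketch below estimates the classic "omniscient" upper bound E_{θ∼b}[V*_θ(s)] ≥ V*_Bayes(b, s), which holds because an agent told the true MDP θ can do no worse than the Bayes-optimal agent that must learn it. This is a standard loose baseline bound, not the paper's method; the Dirichlet-posterior setup, function names, and toy numbers are all illustrative assumptions.

```python
# Minimal sketch (not the paper's method): Monte Carlo estimate of the
# omniscient upper bound E_{theta ~ posterior}[V*_theta(s)], which upper-bounds
# the Bayes-optimal value at the current belief. Setup is assumed, not from
# the paper: a tabular MDP with a Dirichlet posterior over transition rows.
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Solve a fully known MDP. P: (A, S, S) transitions, R: (S, A) rewards."""
    V = np.zeros(P.shape[1])
    while True:
        # One Bellman backup: Q[s, a] = R[s, a] + gamma * P[a, s, :] @ V
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

def omniscient_upper_bound(dirichlet_counts, R, gamma=0.95, n_samples=100, seed=0):
    """Estimate E_{theta ~ posterior}[V*_theta(s)] by sampling MDPs.

    dirichlet_counts: (A, S, S) pseudo-counts encoding an independent
    Dirichlet posterior over each transition row P(. | s, a).
    """
    rng = np.random.default_rng(seed)
    A, S, _ = dirichlet_counts.shape
    V_sum = np.zeros(S)
    for _ in range(n_samples):
        # Draw one complete transition model from the posterior and solve it.
        P = np.stack([[rng.dirichlet(dirichlet_counts[a, s]) for s in range(S)]
                      for a in range(A)])
        V_sum += value_iteration(P, R, gamma)
    return V_sum / n_samples  # upper-bounds the Bayes-optimal value at this belief

if __name__ == "__main__":
    # Toy example: uniform prior (all pseudo-counts = 1) on a 2-action,
    # 3-state problem with a fixed, known reward function.
    counts = np.ones((2, 3, 3))
    R = np.array([[0.0, 1.0], [0.5, 0.0], [1.0, 0.2]])
    print(omniscient_upper_bound(counts, R))
```

This bound is loose precisely because it ignores the cost of learning θ; tightening such bounds so that heuristic search (e.g., AO*) can prune more aggressively is the contribution of the paper.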

Published

2015-03-04

How to Cite

Lee, K., & Kim, K.-E. (2015). Tighter Value Function Bounds for Bayesian Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9700

Issue

Vol. 29 No. 1 (2015)

Section

AAAI Technical Track: Reasoning under Uncertainty