Bounded Optimal Exploration in MDP

Authors

  • Kenji Kawaguchi, Massachusetts Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v30i1.10230

Keywords:

Learning, Exploration, Markov Decision Process

Abstract

Within the framework of probably approximately correct Markov decision processes (PAC-MDP), much theoretical work has focused on methods to attain near optimality after a relatively long period of learning and exploration. However, practical concerns require the attainment of satisfactory behavior within a short period of time. In this paper, we relax the PAC-MDP conditions to reconcile theoretically driven exploration methods and practical needs. We propose simple algorithms for discrete and continuous state spaces, and illustrate the benefits of our proposed relaxation via theoretical analyses and numerical examples. Our algorithms also maintain anytime error bounds and average loss bounds. Our approach accommodates both Bayesian and non-Bayesian methods.
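For readers unfamiliar with the PAC-MDP setting the abstract refers to, the sketch below illustrates the standard count-based "known state-action" exploration idea (in the spirit of R-MAX): a state-action pair is treated as known after m visits, and unknown pairs receive an optimistic value that drives the agent to try them. This is a generic illustration of the framework whose conditions the paper relaxes, not the algorithm proposed in the paper; the toy MDP, the threshold m, and all identifiers here are hypothetical.

    import random
    from collections import defaultdict

    def run_known_state_exploration(n_states=4, n_actions=2, m=5, gamma=0.9,
                                    r_max=1.0, episodes=50, horizon=20, seed=0):
        """Count-based 'known state' exploration on a hypothetical toy MDP."""
        rng = random.Random(seed)

        # Hypothetical toy MDP, hidden from the agent: random transitions/rewards.
        true_p, true_r = {}, {}
        for s in range(n_states):
            for a in range(n_actions):
                w = [rng.random() for _ in range(n_states)]
                true_p[(s, a)] = [x / sum(w) for x in w]
                true_r[(s, a)] = rng.random() * r_max

        counts = defaultdict(int)                      # visit counts n(s, a)
        trans = defaultdict(lambda: defaultdict(int))  # empirical counts n(s, a, s')
        rew_sum = defaultdict(float)                   # cumulative observed reward

        def q_value(s, a, v):
            if counts[(s, a)] < m:
                return r_max / (1.0 - gamma)  # optimistic value for "unknown" pairs
            n = counts[(s, a)]
            r_hat = rew_sum[(s, a)] / n
            return r_hat + gamma * sum(c / n * v[s2] for s2, c in trans[(s, a)].items())

        def plan():
            # Value iteration on the optimistic empirical model, then act greedily.
            v = [0.0] * n_states
            for _ in range(200):
                v = [max(q_value(s, a, v) for a in range(n_actions))
                     for s in range(n_states)]
            return [max(range(n_actions), key=lambda a: q_value(s, a, v))
                    for s in range(n_states)]

        total_reward = 0.0
        for _ in range(episodes):
            s, policy = 0, plan()  # replan at the start of each episode
            for _ in range(horizon):
                a = policy[s]
                s_next = rng.choices(range(n_states), weights=true_p[(s, a)])[0]
                counts[(s, a)] += 1
                trans[(s, a)][s_next] += 1
                rew_sum[(s, a)] += true_r[(s, a)]
                total_reward += true_r[(s, a)]
                s = s_next
        return total_reward

    if __name__ == "__main__":
        print("cumulative reward:", round(run_known_state_exploration(), 2))

Under this scheme, optimism for under-visited pairs guarantees systematic exploration, but near-optimal behavior is only reached after every relevant pair has been visited m times; the abstract's relaxation targets exactly this tension between long-run guarantees and good behavior within a short period of time.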

Published

2016-02-21

How to Cite

Kawaguchi, K. (2016). Bounded Optimal Exploration in MDP. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10230

Section

Technical Papers: Machine Learning Methods