Scaling Up Reinforcement Learning through Targeted Exploration

Authors

  • Timothy Mann, Texas A&M University
  • Yoonsuck Choe, Texas A&M University

DOI:

https://doi.org/10.1609/aaai.v25i1.7929

Abstract

Recent Reinforcement Learning (RL) algorithms, such as R-MAX, make (with high probability) only a small number of poor decisions. In practice, however, these algorithms do not scale well as the number of states grows, because they spend too much effort exploring. We introduce an RL algorithm, State TArgeted R-MAX (STAR-MAX), that explores only a subset of the state space called the exploration envelope ξ. When ξ equals the entire state space, STAR-MAX behaves identically to R-MAX. When ξ is a proper subset of the state space, a recovery rule β is needed to keep exploration within ξ. We compared existing algorithms with STAR-MAX under various exploration envelopes. With an appropriate choice of ξ, STAR-MAX scales far better than existing RL algorithms as the number of states increases. A possible drawback of our algorithm is its dependence on a good choice of ξ and β. However, we show that an effective recovery rule β can be learned on-line, and that ξ can be learned from demonstrations. We also find that even randomly sampled exploration envelopes can improve cumulative rewards compared to R-MAX. We expect these results to lead to more efficient methods for RL in large-scale problems.
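
For intuition, the sketch below illustrates the exploration-envelope idea in Python. It is not the authors' implementation: the class and method names (StarMaxSketch, act, update), the visit-count threshold, and the Q-learning-style update are illustrative assumptions, whereas the actual algorithm plans with an optimistic model as in R-MAX. What the sketch does capture is the core mechanism from the abstract: optimistic exploration is initiated only for states inside the envelope ξ, and the recovery rule β is followed whenever the agent finds itself outside ξ.

```python
# Minimal sketch of the exploration-envelope idea behind STAR-MAX.
# Assumptions (not from the paper): a tabular problem with enumerable states and
# actions, a user-supplied envelope `xi` (a set of states), and a recovery rule
# `beta` (a function mapping a state outside xi to an action). R-MAX details
# (optimistic model planning, exact known-state bookkeeping) are simplified.

import random
from collections import defaultdict


class StarMaxSketch:
    def __init__(self, actions, xi, beta, r_max=1.0, known_threshold=5, gamma=0.95):
        self.actions = list(actions)
        self.xi = set(xi)                    # exploration envelope
        self.beta = beta                     # recovery rule: state -> action
        self.m = known_threshold             # visits before (s, a) counts as "known"
        self.gamma = gamma
        self.counts = defaultdict(int)       # (s, a) visit counts
        self.q = defaultdict(lambda: r_max)  # optimistic initialization (simplified)

    def act(self, state):
        if state not in self.xi:
            # Outside the envelope: follow the recovery rule back toward xi
            # instead of exploring.
            return self.beta(state)
        # Inside the envelope: explore optimistically, as in R-MAX,
        # preferring actions that are not yet "known".
        unknown = [a for a in self.actions if self.counts[(state, a)] < self.m]
        if unknown:
            return random.choice(unknown)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        self.counts[(state, action)] += 1
        # Placeholder value update; the real algorithm replans with an
        # optimistic model over the envelope rather than using TD updates.
        alpha = 0.1
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        self.q[(state, action)] += alpha * (reward + self.gamma * best_next
                                            - self.q[(state, action)])
```

In this sketch β can be any function from states outside ξ to actions; per the abstract, an effective β can also be learned on-line, and ξ itself can be learned from demonstrations. Passing the full state space as xi reduces the behavior to ordinary R-MAX-style exploration, mirroring the equivalence stated above.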

Published

2011-08-04

How to Cite

Mann, T., & Choe, Y. (2011). Scaling Up Reinforcement Learning through Targeted Exploration. Proceedings of the AAAI Conference on Artificial Intelligence, 25(1), 435-440. https://doi.org/10.1609/aaai.v25i1.7929

Section

AAAI Technical Track: Machine Learning