On MABs and Separation of Concerns in Monte-Carlo Planning for MDPs
DOI:
https://doi.org/10.1609/icaps.v24i1.13631
Keywords:
Markov Decision Process, Multi-Armed Bandit, Online Planning, Simple Regret, Monte-Carlo Tree Search
Abstract
Linking online planning for MDPs with their special case of stochastic multi-armed bandit problems, we analyze three state-of-the-art Monte-Carlo tree search algorithms: UCT, BRUE, and MaxUCT. Using the outcome of this analysis, we (i) introduce two new MCTS algorithms, MaxBRUE, which combines uniform sampling with Bellman backups, and MpaUCT, which combines UCB1 with a novel backup procedure, (ii) analyze them formally and empirically, and (iii) show how MCTS algorithms can be further stratified by an exploration control mechanism that improves their empirical performance without harming the formal guarantees.
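The abstract contrasts two design choices that recur in these algorithms: how actions are selected at a node (e.g., UCB1 versus uniform sampling) and how value estimates are backed up (Monte-Carlo averaging versus Bellman-style max backups). The sketch below is an illustrative toy implementation of those two ingredients, not the paper's algorithms; the `Node` class, `mc_backup`, and `bellman_backup` names are assumptions introduced here for clarity.

```python
import math

class Node:
    """A toy MCTS node with one statistic per action (arm)."""
    def __init__(self, num_actions):
        self.counts = [0] * num_actions    # n_a: number of times action a was sampled
        self.values = [0.0] * num_actions  # current value estimate per action

    def ucb1_action(self):
        """UCB1 selection: empirical mean plus a count-based exploration bonus."""
        for a, n_a in enumerate(self.counts):
            if n_a == 0:
                return a  # sample every action at least once
        total = sum(self.counts)
        return max(
            range(len(self.counts)),
            key=lambda a: self.values[a]
            + math.sqrt(2.0 * math.log(total) / self.counts[a]),
        )

    def mc_backup(self, action, sample_return):
        """Monte-Carlo (averaging) backup, UCT-style: incremental mean of sampled returns."""
        self.counts[action] += 1
        self.values[action] += (sample_return - self.values[action]) / self.counts[action]

    def bellman_backup(self, action, reward, child_values, gamma=1.0):
        """Bellman (max) backup, in the spirit of MaxUCT/MaxBRUE:
        back up the immediate reward plus the best estimated child value."""
        self.counts[action] += 1
        self.values[action] = reward + gamma * max(child_values, default=0.0)
```

For instance, at the root of a two-action problem one would call `node.ucb1_action()` to pick an arm, run a rollout, and then apply either `mc_backup` or `bellman_backup` depending on which backup rule is being studied; mixing a selection rule with a backup rule in this way is the "separation of concerns" the abstract refers to.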