Learning to Stop: Dynamic Simulation Monte-Carlo Tree Search
Keywords: Games, Reinforcement Learning
Abstract
Monte Carlo tree search (MCTS) has achieved state-of-the-art results in many domains, such as Go and Atari games, when combined with deep neural networks (DNNs). With more simulations, MCTS can achieve higher performance, but it also requires enormous amounts of CPU and GPU resources. However, not all states require a long search to identify the best action the agent can find. For example, in 19x19 Go and NoGo, we found that for more than half of the states, the best action predicted by the DNN remains unchanged even after searching for 2 minutes. This implies that a significant amount of resources can be saved if we can stop the search earlier once we are confident in the current search result. In this paper, we propose to achieve this goal by predicting the uncertainty of the current search status and using the result to decide whether to stop searching. With our algorithm, called Dynamic Simulation MCTS (DS-MCTS), we can speed up a NoGo agent trained by AlphaZero by a factor of 2.5 while maintaining a similar winning rate, which is critical for training and conducting experiments. Also, under the same average simulation count, our method achieves a 61% winning rate against the original program.
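The control flow the abstract describes can be sketched as follows. This is a minimal, self-contained illustration only: in the paper, the stopping decision comes from a learned uncertainty predictor, whereas here `stop_confidence` is a stand-in heuristic (visit-count dominance), and `run_simulation` is a toy stand-in for one real MCTS simulation. All names and parameters are illustrative assumptions, not the authors' actual API.

```python
import random

def run_simulation(visits, values):
    # Toy stand-in for one MCTS simulation: sample an action, update stats.
    a = random.randrange(len(visits))
    visits[a] += 1
    values[a] += random.random()

def stop_confidence(visits):
    # Stand-in for the learned predictor: fraction of visits on the
    # current best move (the real method uses a trained network).
    total = sum(visits) or 1
    return max(visits) / total

def ds_mcts(num_actions=4, max_simulations=400, check_every=50, threshold=0.6):
    visits = [0] * num_actions
    values = [0.0] * num_actions
    for i in range(1, max_simulations + 1):
        run_simulation(visits, values)
        # Periodically ask whether the current result looks stable enough;
        # if so, stop early and save the remaining simulation budget.
        if i % check_every == 0 and stop_confidence(visits) >= threshold:
            break
    best = max(range(num_actions), key=lambda a: visits[a])
    return best, i  # chosen action and simulations actually used

best, used = ds_mcts()
print(best, used)
```

The point of the sketch is that the simulation budget becomes dynamic: easy states (where one action quickly dominates) terminate at an early checkpoint, while contested states run to the full budget.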
How to Cite
Lan, L.-C., Wu, T.-R., Wu, I.-C., & Hsieh, C.-J. (2021). Learning to Stop: Dynamic Simulation Monte-Carlo Tree Search. Proceedings of the AAAI Conference on Artificial Intelligence, 35(1), 259-267. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16100
AAAI Technical Track on Application Domains