AlphaSnake: Policy Iteration on a Nondeterministic NP-Hard Markov Decision Process (Student Abstract)

Authors

  • Kevin Du, Harvard University, Cambridge, U.S.
  • Ian Gemp, DeepMind, London, U.K.
  • Yi Wu, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China
  • Yingying Wu, Center of Mathematical Sciences and Applications, Harvard University, Cambridge, U.S.; Department of Mathematics, University of Houston, Houston, U.S.

DOI:

https://doi.org/10.1609/aaai.v37i13.26962

Keywords:

Reinforcement Learning, Planning With Markov Models, Stochastic Optimization, NP-hardness

Abstract

Reinforcement learning has been used to approach well-known NP-hard combinatorial problems in graph theory. Among these, Hamiltonian cycle problems are exceptionally difficult to analyze, even when restricted to individual instances of structurally complex graphs. In this paper, we use Monte Carlo Tree Search (MCTS), the search algorithm behind many state-of-the-art reinforcement learning algorithms such as AlphaZero, to create autonomous agents that learn to play the game of Snake, a game centered on properties of Hamiltonian cycles on grid graphs. The game of Snake can be formulated as a single-player discounted Markov Decision Process (MDP) in which the agent must act optimally in a stochastic environment. Determining the optimal policy for Snake, defined as the policy that first maximizes the probability of winning (the win rate) and, with lower priority, minimizes the expected number of time steps to win, is conjectured to be NP-hard. In terms of performance, compared to prior work on the Snake game, our algorithm is the first to achieve a win rate above 0.5 (a uniform random policy achieves a win rate below 2.57 x 10^{-15}), demonstrating the versatility of AlphaZero in tackling NP-hard problems.
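For illustration only, the Python sketch below shows one way to cast Snake as a single-player stochastic MDP and to select actions by flat Monte Carlo rollouts. It is not the authors' implementation: the AlphaZero-style MCTS in the paper uses a learned policy/value network and tree search, whereas this sketch uses uniform-random rollouts, and all names (SnakeMDP, mc_action, the 4x4 board size, the reward of 1 per apple) are illustrative assumptions.

import copy
import random
from collections import deque

GRID = 4  # 4x4 board for illustration; the paper's board size is not assumed here

class SnakeMDP:
    """Snake as a single-player MDP: the state is the snake's body plus the
    apple location, the four moves are the actions, and the random respawn
    of the apple after each capture makes the transitions stochastic."""

    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

    def __init__(self):
        self.snake = deque([(0, 0)])  # head is at index 0
        self.apple = self._spawn_apple()

    def _spawn_apple(self):
        free = [(r, c) for r in range(GRID) for c in range(GRID)
                if (r, c) not in self.snake]
        return random.choice(free) if free else None  # None: board is full

    def step(self, action):
        """Apply one move; return (reward, done). Reward 1 per apple eaten."""
        head = (self.snake[0][0] + action[0], self.snake[0][1] + action[1])
        if not (0 <= head[0] < GRID and 0 <= head[1] < GRID) or head in self.snake:
            return 0.0, True                  # hit a wall or itself: lose
        self.snake.appendleft(head)
        if head == self.apple:
            self.apple = self._spawn_apple()  # stochastic transition
            return 1.0, self.apple is None    # win if the board is now full
        self.snake.pop()                      # no apple eaten: tail advances
        return 0.0, False

def rollout_value(env, depth=30, gamma=0.99):
    """Discounted return of a uniform-random rollout from env's current state."""
    total, discount = 0.0, 1.0
    for _ in range(depth):
        reward, done = env.step(random.choice(SnakeMDP.ACTIONS))
        total += discount * reward
        discount *= gamma
        if done:
            break
    return total

def mc_action(env, n_rollouts=16, gamma=0.99):
    """Pick the action with the best mean rollout value (flat Monte Carlo,
    a much weaker stand-in for the paper's AlphaZero-style MCTS)."""
    best_action, best_value = None, float("-inf")
    for action in SnakeMDP.ACTIONS:
        value = 0.0
        for _ in range(n_rollouts):
            sim = copy.deepcopy(env)          # simulate without mutating env
            reward, done = sim.step(action)
            value += reward + (0.0 if done else gamma * rollout_value(sim, gamma=gamma))
        value /= n_rollouts
        if value > best_value:
            best_action, best_value = action, value
    return best_action

# Usage: play one episode with the flat Monte Carlo policy.
env, done, steps = SnakeMDP(), False, 0
while not done and steps < 100:
    _, done = env.step(mc_action(env))
    steps += 1
print(f"episode over after {steps} steps; snake length {len(env.snake)}")

Note that the paper's objective is lexicographic (maximize win rate first, minimize expected time to win second); in this simplified sketch the discount factor gamma is what trades reward off against time.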

Published

2024-07-15

How to Cite

Du, K., Gemp, I., Wu, Y., & Wu, Y. (2024). AlphaSnake: Policy Iteration on a Nondeterministic NP-Hard Markov Decision Process (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 16204-16205. https://doi.org/10.1609/aaai.v37i13.26962