Approximate State Abstraction for Markov Games

Authors

  • Hiroki Ishibashi, The University of Electro-Communications
  • Kenshi Abe, CyberAgent; The University of Electro-Communications
  • Atsushi Iwasaki, The University of Electro-Communications

DOI:

https://doi.org/10.1609/aaai.v39i17.33930

Abstract

This paper introduces state abstraction for two-player zero-sum Markov games (TZMGs), in which the payoffs for the two players are determined by the state representing the environment and their respective actions, with state transitions following a Markov decision process. For example, in games like soccer, the value of an action changes with the state of play, so such games are naturally described as Markov games. In TZMGs, as the number of states grows, computing an equilibrium becomes more difficult. We therefore abstract the states of a TZMG and examine the resulting performance. State abstraction reduces the number of states by treating multiple distinct states as a single state, and there is a substantial body of research on finding optimal policies for Markov decision processes using state abstraction. This study extends state abstraction for MDPs to Markov games. In this setting, the game with state abstraction may yield equilibrium solutions that differ from those of the ground game. To evaluate the equilibrium solutions of the abstracted game, we derive bounds on the duality gap, which measures the distance from the equilibrium solutions of the ground game. Finally, we demonstrate our state abstraction on Markov Soccer, compute equilibrium policies, and examine the results.
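The duality gap used in the abstract can be illustrated in the simplest case, a one-state (matrix) zero-sum game: for a policy pair (x, y), the gap is the row player's best-response value against y minus the column player's best-response value against x, and it is zero exactly at a Nash equilibrium. The sketch below is illustrative only (the payoff matrix and function name are our own, not from the paper):

```python
import numpy as np

def duality_gap(A, x, y):
    """Duality gap (exploitability) of the strategy pair (x, y)
    in the zero-sum matrix game with row-player payoff matrix A.

    gap = max_{x'} x'^T A y - min_{y'} x^T A y'
    This is nonnegative and equals zero iff (x, y) is a Nash equilibrium.
    """
    best_row_value = np.max(A @ y)   # row player's best pure response to y
    best_col_value = np.min(x @ A)   # column player's best pure response to x
    return best_row_value - best_col_value

# Matching pennies: uniform play is the unique equilibrium, so its gap is 0,
# while any pure strategy is fully exploitable (gap 1 against uniform play).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
uniform = np.array([0.5, 0.5])
print(duality_gap(A, uniform, uniform))                 # 0.0
print(duality_gap(A, np.array([1.0, 0.0]), uniform))    # 1.0
```

In the Markov-game setting of the paper, the same quantity is defined over value functions of the full game; the bounds derived there control how much this gap can grow when equilibria are computed on the abstracted game instead of the ground game.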

Published

2025-04-11

How to Cite

Ishibashi, H., Abe, K., & Iwasaki, A. (2025). Approximate State Abstraction for Markov Games. Proceedings of the AAAI Conference on Artificial Intelligence, 39(17), 17555–17563. https://doi.org/10.1609/aaai.v39i17.33930

Section

AAAI Technical Track on Machine Learning III