Model and Reinforcement Learning for Markov Games with Risk Preferences

Authors

  • Wenjie Huang, Shenzhen Research Institute of Big Data
  • Viet Hai Pham, National University of Singapore
  • William Benjamin Haskell, Purdue University

DOI:

https://doi.org/10.1609/aaai.v34i02.5574

Abstract

We motivate and propose a new model for non-cooperative Markov games that captures the interactions of risk-aware players. The model characterizes the time-consistent dynamic “risk” arising from both stochastic state transitions (inherent to the game) and randomized mixed strategies (due to all other players). We propose an appropriate risk-aware equilibrium concept and establish the existence of such equilibria in stationary strategies via an application of Kakutani's fixed-point theorem. We further propose a simulation-based Q-learning-type algorithm for computing risk-aware equilibria. This algorithm works with a special class of minimax risk measures that can naturally be written as saddle-point stochastic optimization problems and that covers many widely investigated risk measures. Finally, the almost-sure convergence of this simulation-based algorithm to an equilibrium is demonstrated under mild conditions. Numerical experiments on a two-player queuing game validate the properties of our model and algorithm and demonstrate their value and applicability in real-life competitive decision-making.
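The full algorithm and its convergence analysis appear in the paper; purely as a rough sketch of how a saddle-point risk measure can enter a simulation-based Q-learning update, the snippet below uses the Rockafellar-Uryasev representation of CVaR, one example of a minimax risk measure. The tabular single-player simplification, the function name cvar_q_update, and all parameter values are illustrative assumptions, not the authors' algorithm.

```python
# Illustrative sketch only (assumptions noted above): one tabular update of a
# risk-aware Q-table, using the Rockafellar-Uryasev saddle-point form of CVaR,
#   CVaR_alpha(X) = min_eta { eta + E[(X - eta)_+] / (1 - alpha) }.
import numpy as np

def cvar_q_update(Q, eta, s, a, cost, s_next, alpha=0.95, lr=0.1, gamma=0.9):
    """Update Q[s, a] (risk-aware cost-to-go) and eta[s, a] (CVaR threshold)
    from one simulated transition (s, a, cost, s_next)."""
    # Greedy continuation value at the successor state (cost minimization).
    target = cost + gamma * Q[s_next].min()
    # Sample of the inner CVaR objective at the current threshold eta[s, a].
    cvar_sample = eta[s, a] + max(target - eta[s, a], 0.0) / (1.0 - alpha)
    # Q-learning step toward the risk-aware sample target.
    Q[s, a] += lr * (cvar_sample - Q[s, a])
    # Stochastic subgradient step on the threshold (the saddle-point variable).
    grad_eta = 1.0 - (1.0 if target > eta[s, a] else 0.0) / (1.0 - alpha)
    eta[s, a] -= lr * grad_eta
    return Q, eta

# Example: 4 states, 2 actions, applying the update to one sampled transition.
Q = np.zeros((4, 2))
eta = np.zeros((4, 2))
Q, eta = cvar_q_update(Q, eta, s=0, a=1, cost=1.0, s_next=2)
```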

Published

2020-04-03

How to Cite

Huang, W., Pham, V. H., & Haskell, W. B. (2020). Model and Reinforcement Learning for Markov Games with Risk Preferences. Proceedings of the AAAI Conference on Artificial Intelligence, 34(02), 2022-2029. https://doi.org/10.1609/aaai.v34i02.5574

Section

AAAI Technical Track: Game Theory and Economic Paradigms