Balancing Safety and Exploitability in Opponent Modeling

Authors

  • Zhikun Wang, Max Planck Institute for Intelligent Systems
  • Abdeslam Boularias, Max Planck Institute for Intelligent Systems
  • Katharina Mülling, Max Planck Institute for Intelligent Systems
  • Jan Peters, Max Planck Institute for Intelligent Systems

DOI:

https://doi.org/10.1609/aaai.v25i1.7981

Abstract

Opponent modeling is a critical mechanism in repeated games. It allows a player to adapt its strategy in order to better respond to the presumed preferences of its opponents. We introduce a new modeling technique that adaptively balances exploitability and risk reduction. An opponent's strategy is modeled with a set of possible strategies that contains the actual strategy with high probability. The algorithm is safe, as its expected payoff is above the minimax payoff with high probability, and it can exploit the opponent's preferences once sufficient observations have been obtained. We apply the technique to normal-form games and to stochastic games with a finite number of stages. The performance of the proposed approach is first demonstrated on repeated rock-paper-scissors games. Subsequently, the approach is evaluated in a human-robot table-tennis setting where the robot player learns to prepare to return a served ball. By modeling the human players, the robot chooses a forehand, backhand, or middle preparation pose before the opponent serves. The learned strategies can exploit the opponent's preferences, leading to a higher rate of successful returns.
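The following is a minimal Python sketch of the idea described above for repeated rock-paper-scissors: the opponent's strategy is summarized by a confidence set around the empirical play frequencies, a best response to the empirical model is used only when its worst-case payoff over that set stays at or above the minimax value, and otherwise the player falls back to the maximin strategy. The specific confidence construction (a Hoeffding-style L1 ball), the greedy worst-case computation, and the fallback rule are illustrative assumptions, not the exact algorithm from the paper.

```python
import numpy as np

# Row player's payoff matrix for rock-paper-scissors
# (rows: our action, columns: opponent action; order R, P, S).
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)

MINIMAX_VALUE = 0.0               # value of the game for the row player
MAXIMIN_STRATEGY = np.ones(3) / 3.0

def confidence_radius(n, delta=0.05):
    """L1 radius around the empirical opponent strategy; a standard
    Hoeffding-style bound, used here as a stand-in for the paper's
    confidence construction."""
    if n == 0:
        return 2.0                # vacuous: the set is the whole simplex
    return np.sqrt(2.0 * np.log(2.0 ** 3 / delta) / n)

def worst_case_payoff(x, p_hat, eps):
    """Minimum of x^T A q over opponent strategies q in the simplex with
    ||q - p_hat||_1 <= eps, computed by a greedy mass shift."""
    c = A.T @ x                   # our payoff for each opponent action
    q = p_hat.copy()
    j_min = int(np.argmin(c))     # opponent action that hurts us most
    budget = min(eps / 2.0, 1.0 - q[j_min])
    for j in np.argsort(-c):      # take mass from actions favorable to us
        if j == j_min or budget <= 0:
            continue
        move = min(q[j], budget)
        q[j] -= move
        q[j_min] += move
        budget -= move
    return float(c @ q)

def choose_strategy(counts, delta=0.05):
    """Play a best response to the empirical model if it remains safe
    against every strategy in the confidence set; otherwise fall back
    to the maximin strategy."""
    n = counts.sum()
    p_hat = counts / n if n > 0 else MAXIMIN_STRATEGY
    eps = confidence_radius(n, delta)
    best_response = np.eye(3)[int(np.argmax(A @ p_hat))]
    if worst_case_payoff(best_response, p_hat, eps) >= MINIMAX_VALUE:
        return best_response      # exploit the modeled preference
    return MAXIMIN_STRATEGY       # stay safe

# With few observations the confidence set is wide and the safe maximin
# strategy is returned; with more data the same bias toward rock is exploited.
print(choose_strategy(np.array([14.0, 4.0, 2.0])))     # ~uniform play
print(choose_strategy(np.array([140.0, 40.0, 20.0])))  # plays paper
```

The two calls at the end illustrate the adaptive balance the abstract describes: the exploiting response is only selected once enough observations have shrunk the confidence set so that safety is no longer at risk.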

Published

2011-08-04

How to Cite

Wang, Z., Boularias, A., Mülling, K., & Peters, J. (2011). Balancing Safety and Exploitability in Opponent Modeling. Proceedings of the AAAI Conference on Artificial Intelligence, 25(1), 1515-1520. https://doi.org/10.1609/aaai.v25i1.7981