Extracting Learned Discard and Knocking Strategies from a Gin Rummy Bot

Authors

  • Benjamin Goldstein, The Pennsylvania State University
  • Jean-Pierre Astudillo Guerra, The Pennsylvania State University
  • Emily Haigh, The Pennsylvania State University
  • Bryan Cruz Ulloa, The Pennsylvania State University
  • Jeremy Blum, The Pennsylvania State University

DOI

https://doi.org/10.1609/aaai.v35i17.17827

Keywords

Counterfactual Regret Minimization, Monte Carlo Counterfactual Regret Minimization, Gin Rummy, Game Theory, Nash Equilibrium, Imperfect Information Games

Abstract

Various Gin Rummy strategy guides provide heuristics for human players to improve their gameplay. Often these heuristics conflict with one another or contain ambiguities that limit their applicability, especially for discard and end-of-game decisions. This paper describes an approach to analyzing the machine learning capabilities of a Gin Rummy agent to help resolve these conflicts and ambiguities. There are three main decision points in the game: when to draw from the discard pile, which card to discard from the player's hand, and when to knock. The agent uses a learning approach to estimate the expected utility of discards. An analysis of these utility values provides insight into resolving ambiguities in tips for discard decisions in human play. The agent's end-of-game, or knocking, strategy was derived using Monte Carlo Counterfactual Regret Minimization (MCCFR). This approach was applied to estimate Nash equilibrium knocking strategies under different rules of the game. The analysis suggests that conflicts in the end-of-game playing tips are due in part to different rules used in common Gin Rummy variants.
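
As a minimal sketch of the regret-matching update at the core of MCCFR, consider the toy one-shot knocking decision below. The payoff model, deadwood ranges, and undercut penalty are illustrative assumptions, not the paper's implementation; the average strategy accumulated over iterations is the quantity that converges toward a Nash equilibrium.

import random
from collections import defaultdict

# Toy illustration of the regret-matching core of MCCFR on a single
# knocking decision. Payoffs and deadwood model are hypothetical
# simplifications, not the paper's actual implementation.

ACTIONS = ["knock", "play_on"]
regret_sum = defaultdict(lambda: [0.0] * len(ACTIONS))
strategy_sum = defaultdict(lambda: [0.0] * len(ACTIONS))

def regret_matching(infoset):
    """Mix over actions in proportion to accumulated positive regret."""
    positives = [max(r, 0.0) for r in regret_sum[infoset]]
    total = sum(positives)
    if total > 0:
        return [p / total for p in positives]
    return [1.0 / len(ACTIONS)] * len(ACTIONS)  # uniform fallback

def sampled_utility(action, my_deadwood, opp_deadwood):
    """Simplified payoff: knocking wins the deadwood difference, but an
    opponent with equal-or-lower deadwood scores an undercut bonus."""
    if action == "play_on":
        return 0.0  # baseline: defer the decision
    if my_deadwood < opp_deadwood:
        return opp_deadwood - my_deadwood
    return -(my_deadwood - opp_deadwood) - 25  # undercut penalty

def train(iterations=100_000):
    for _ in range(iterations):
        my_deadwood = random.randint(1, 10)    # knocking legal at <= 10
        opp_deadwood = random.randint(0, 40)   # sampled hidden information
        infoset = my_deadwood                  # all the knocker can see
        strategy = regret_matching(infoset)
        utilities = [sampled_utility(a, my_deadwood, opp_deadwood)
                     for a in ACTIONS]
        node_value = sum(p * u for p, u in zip(strategy, utilities))
        for i in range(len(ACTIONS)):
            regret_sum[infoset][i] += utilities[i] - node_value
            strategy_sum[infoset][i] += strategy[i]

train()
for deadwood in range(1, 11):
    total = sum(strategy_sum[deadwood])
    knock_prob = strategy_sum[deadwood][0] / total
    print(f"deadwood {deadwood:2d}: knock with probability {knock_prob:.2f}")

Because the opponent's deadwood is resampled on every iteration, the regret estimates are Monte Carlo samples rather than full counterfactual sums; this sampling is what lets MCCFR scale to imperfect-information games like Gin Rummy.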

Published

2021-05-18

How to Cite

Goldstein, B., Astudillo Guerra, J.-P., Haigh, E., Cruz Ulloa, B., & Blum, J. (2021). Extracting Learned Discard and Knocking Strategies from a Gin Rummy Bot. Proceedings of the AAAI Conference on Artificial Intelligence, 35(17), 15518-15525. https://doi.org/10.1609/aaai.v35i17.17827