Automated Play-Testing through RL Based Human-Like Play-Styles Generation

Authors

  • Pierre Le Pelletier de Woillemont (Ubisoft; Sorbonne Université, CNRS, LIP6)
  • Rémi Labory (Ubisoft)
  • Vincent Corruble (LIP6, Université Pierre et Marie Curie, Paris 6)

DOI:

https://doi.org/10.1609/aiide.v18i1.21958

Keywords:

Reinforcement Learning, Conditional Policy, Play-Style Encoding, Automated Game Testing, Game Testing, Players Clustering

Abstract

The increasing complexity of gameplay mechanisms in modern video games is leading to the emergence of a wider range of ways to play. This variety of possible play-styles needs to be anticipated and taken into account by designers, through automated tests. Reinforcement Learning (RL) is a promising answer to the need to automate video game testing. To that end, one needs to train an agent to play the game while ensuring it generates the same play-styles as human players, in order to give meaningful feedback to the designers. We present CARMI: a Configurable Agent with Relative Metrics as Input, an agent able to emulate players' play-styles, even on previously unseen levels. Unlike current methods, it does not rely on full trajectories, but only on summary data. Moreover, it requires only a small amount of human data, making it compatible with the constraints of modern video game production. This novel agent could be used to investigate behaviors and balancing during the production of a video game with a realistic amount of training time.
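The core idea of the abstract, a single policy conditioned on summary play-style metrics rather than full trajectories, can be illustrated with a minimal sketch. This is not the authors' implementation: the feature dimensions, metric names, and the untrained network weights below are all hypothetical, chosen only to show how a style vector can be fed to the policy alongside the game state.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 8    # hypothetical game-state features
STYLE_DIM = 3    # hypothetical relative metrics, e.g. aggression, exploration, pace
N_ACTIONS = 4
HIDDEN = 16

# Randomly initialised weights stand in for a trained policy.
W1 = rng.normal(scale=0.1, size=(STATE_DIM + STYLE_DIM, HIDDEN))
W2 = rng.normal(scale=0.1, size=(HIDDEN, N_ACTIONS))

def policy(state: np.ndarray, style: np.ndarray) -> np.ndarray:
    """Return action probabilities conditioned on a play-style vector."""
    x = np.concatenate([state, style])   # conditioning by input concatenation
    h = np.tanh(x @ W1)
    logits = h @ W2
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

state = rng.normal(size=STATE_DIM)
aggressive = np.array([0.9, 0.2, 0.7])   # relative metrics scaled to [0, 1]
cautious = np.array([0.1, 0.8, 0.3])

# The same network produces different action distributions per style vector.
print(policy(state, aggressive))
print(policy(state, cautious))
```

Because the style vector is just another input, a single trained agent can be reconfigured at test time to sweep over the play-styles observed in a player population, without retraining per style.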

Published

2022-10-11

How to Cite

Le Pelletier de Woillemont, P., Labory, R., & Corruble, V. (2022). Automated Play-Testing through RL Based Human-Like Play-Styles Generation. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 18(1), 146-154. https://doi.org/10.1609/aiide.v18i1.21958