Learning Adaptive Game Soundtrack Control

Authors

  • Aaron Dorsey, Gettysburg College
  • Todd W. Neller, Gettysburg College
  • Hien G. Tran, Gettysburg College
  • Veysel Yilmaz, Gettysburg College

DOI:

https://doi.org/10.1609/aaai.v37i13.26909

Keywords:

Artificial Intelligence, Game Design, Human-Aware AI in Sound and Music, Machine Learning, Supervised Learning, Music, Soundtrack

Abstract

In this paper, we demonstrate a novel technique for dynamically generating an emotionally directed video game soundtrack. We begin with a human Conductor who observes gameplay and directs the emotions that would enhance the observed gameplay experience. We apply supervised learning to synchronized samples of gameplay input features and the Conductor's emotional direction outputs in order to fit a mathematical model of the Conductor's emotional direction. Then, during gameplay, the emotional direction model maps gameplay state input to emotional direction output, which is in turn fed to a music generation module that dynamically generates emotionally relevant music. Our empirical study suggests that random forests serve well for modeling the Conductor in our two experimental game genres.
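
To make the pipeline concrete, the following is a minimal sketch (not the authors' implementation) of the Conductor-modeling step, using scikit-learn's RandomForestRegressor. The feature layout, the valence/arousal interpretation of the emotional direction, and the downstream music_generator call are hypothetical placeholders.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: each row pairs a synchronized sample of
# gameplay features (e.g., player health, enemy proximity, score rate)
# with the Conductor's emotional direction (e.g., valence, arousal).
gameplay_features = np.random.rand(500, 3)    # placeholder gameplay samples
emotional_direction = np.random.rand(500, 2)  # placeholder Conductor labels

# Fit the Conductor model offline on the recorded session data.
conductor_model = RandomForestRegressor(n_estimators=100, random_state=0)
conductor_model.fit(gameplay_features, emotional_direction)

# During gameplay: map the current game state to an emotional direction
# and hand it to the (hypothetical) music generation module.
current_state = np.array([[0.4, 0.8, 0.1]])
valence, arousal = conductor_model.predict(current_state)[0]
# music_generator.update(valence, arousal)  # hypothetical downstream call

Random forests handle heterogeneous gameplay features without feature scaling and fit multi-output targets directly, which makes them a convenient default; any other multi-output regressor could be substituted in this sketch.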

Published

2024-07-15

How to Cite

Dorsey, A., Neller, T. W., Tran, H. G., & Yilmaz, V. (2024). Learning Adaptive Game Soundtrack Control. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 16070-16077. https://doi.org/10.1609/aaai.v37i13.26909

Section

EAAI Symposium: Human-Aware AI in Sound and Music