A Tool for Generating Controllable Variations of Musical Themes Using Variational Autoencoders with Latent Space Regularisation

Authors

  • Berker Banar Queen Mary University of London
  • Nick Bryan-Kinns Queen Mary University of London
  • Simon Colton Queen Mary University of London

DOI:

https://doi.org/10.1609/aaai.v37i13.27059

Keywords:

Controllable Music Generation, Variational Autoencoder, Latent Space Regularisation, Human-machine Co-creation

Abstract

A common musical composition practice is to develop musical pieces using variations of musical themes. In this study, we present an interactive tool which can generate variations of musical themes in real time using a variational autoencoder model. Our tool is controllable via semantically meaningful musical attributes, using a latent space regularisation technique to increase the explainability of the model. The tool is integrated into an industry-standard digital audio workstation - Ableton Live - using the Max4Live device framework, and can run locally on an average personal CPU rather than requiring a costly GPU cluster. In this way we demonstrate how cutting-edge AI research can be integrated into the existing workflows of professional and practising musicians for use in the real world beyond the research lab.
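The core idea behind latent space regularisation for controllability is to add a penalty term to the VAE training loss that ties a chosen latent dimension to a semantically meaningful musical attribute (for example, note density or rhythmic complexity), so that moving along that dimension produces a predictable variation. The abstract does not give the authors' exact formulation; the sketch below illustrates one common attribute-regularisation penalty of this kind, where the function name, `delta` hyperparameter, and use of NumPy are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def attribute_regularisation_loss(z_dim, attr, delta=1.0):
    """Illustrative attribute-regularisation penalty (not the authors' code).

    Encourages a single latent dimension (`z_dim`, one value per batch item)
    to vary monotonically with a musical attribute (`attr`, same shape):
    whenever one item's attribute value exceeds another's, the corresponding
    latent values should be ordered the same way.  `delta` is an assumed
    scaling hyperparameter controlling how sharp the soft sign is.
    """
    z_dim = np.asarray(z_dim, dtype=float)
    attr = np.asarray(attr, dtype=float)
    # Pairwise difference matrices over the batch.
    dz = z_dim[:, None] - z_dim[None, :]
    da = attr[:, None] - attr[None, :]
    # Penalise mismatch between the soft sign of latent differences
    # and the hard sign of attribute differences.
    return np.mean(np.abs(np.tanh(delta * dz) - np.sign(da)))
```

During training this term would be added to the usual VAE reconstruction and KL losses; at generation time, a musician can then slide the regularised dimension up or down to steer the corresponding attribute of the generated theme variation.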

Published

2023-09-06

How to Cite

Banar, B., Bryan-Kinns, N., & Colton, S. (2023). A Tool for Generating Controllable Variations of Musical Themes Using Variational Autoencoders with Latent Space Regularisation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 16401-16403. https://doi.org/10.1609/aaai.v37i13.27059