Symbolic Music Generation with Transformer-GANs

Authors

  • Aashiq Muhamed, Amazon Web Services
  • Liang Li, Amazon Web Services
  • Xingjian Shi, Amazon Web Services
  • Suri Yaddanapudi, Amazon Web Services
  • Wayne Chi, Amazon Web Services
  • Dylan Jackson, Amazon Web Services
  • Rahul Suresh, Amazon Web Services
  • Zachary C. Lipton, Carnegie Mellon University
  • Alex J. Smola, Amazon Web Services

Keywords

Art/Music/Creativity, Adversarial Learning & Robustness, Neural Generative Models & Autoencoders, (Deep) Neural Network Algorithms

Abstract

Autoregressive models using Transformers have emerged as the dominant approach for music generation with the goal of synthesizing minute-long compositions that exhibit large-scale musical structure. These models are commonly trained by minimizing the negative log-likelihood (NLL) of the observed sequence in an autoregressive manner. Unfortunately, the quality of samples from these models tends to degrade significantly for long sequences, a phenomenon attributed to exposure bias. Fortunately, we are able to detect these failures with classifiers trained to distinguish between real and sampled sequences, an observation that motivates our exploration of adversarial losses to complement the NLL objective. We use a pre-trained Span-BERT model for the discriminator of the GAN, which in our experiments helped with training stability. We use the Gumbel-Softmax trick to obtain a differentiable approximation of the sampling process. This makes discrete sequences amenable to optimization in GANs. In addition, we break the sequences into smaller chunks to ensure that we stay within a given memory budget. We demonstrate via human evaluations and a new discriminative metric that the music generated by our approach outperforms a baseline trained with likelihood maximization, the state-of-the-art Music Transformer, and other GANs used for sequence generation. 57% of people prefer music generated via our approach while 43% prefer Music Transformer.
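The Gumbel-Softmax trick mentioned in the abstract replaces hard, non-differentiable token sampling with a temperature-controlled soft sample, so gradients from the discriminator can flow back into the generator. The sketch below is a minimal NumPy illustration of the relaxation itself, not the authors' implementation; the function name and parameters are ours.

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, rng=None):
    """Differentiable (soft) sample from a categorical distribution.

    Adds Gumbel(0, 1) noise to the logits and applies a temperature-scaled
    softmax. As tau -> 0 the output approaches a one-hot (hard) sample;
    larger tau yields smoother, more uniform probability vectors.
    """
    rng = rng or np.random.default_rng()
    # Gumbel(0, 1) noise via inverse-CDF sampling: g = -log(-log(u))
    u = rng.uniform(1e-10, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))
    y = (logits + g) / tau
    # Numerically stable softmax over the vocabulary axis
    y = y - y.max(axis=-1, keepdims=True)
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

# Toy example: 3-token vocabulary with probabilities (0.1, 0.2, 0.7)
logits = np.log(np.array([0.1, 0.2, 0.7]))
soft = gumbel_softmax_sample(logits, tau=0.5, rng=np.random.default_rng(0))
# `soft` is a probability vector over the 3 tokens; passing this soft vector
# (rather than a hard argmax token) to the discriminator keeps the whole
# generator-discriminator pipeline differentiable.
```

In the hard limit (tau → 0) the vector concentrates on a single token, recovering ordinary sampling; training typically anneals the temperature to trade off gradient quality against sample discreteness.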

Published

2021-05-18

How to Cite

Muhamed, A., Li, L., Shi, X., Yaddanapudi, S., Chi, W., Jackson, D., Suresh, R., Lipton, Z. C., & Smola, A. J. (2021). Symbolic Music Generation with Transformer-GANs. Proceedings of the AAAI Conference on Artificial Intelligence, 35(1), 408-417. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16117

Section

AAAI Technical Track on Application Domains