MusER: Musical Element-Based Regularization for Generating Symbolic Music with Emotion

Authors

  • Shulei Ji, Xi'an Jiaotong University
  • Xinyu Yang, Xi'an Jiaotong University

DOI

https://doi.org/10.1609/aaai.v38i11.29178

Keywords

ML: Deep Generative Models & Autoencoders, APP: Other Applications, ML: Representation Learning

Abstract

Generating music with emotion is an important task in automatic music generation, in which emotion is evoked through a variety of musical elements (such as pitch and duration) that change over time and interact with each other. However, prior research on deep learning-based emotional music generation has rarely explored the contribution of different musical elements to emotion, let alone the deliberate manipulation of these elements to alter the emotion of music; this gap hinders fine-grained, element-level control over emotion. To address it, we present a novel approach employing musical element-based regularization in the latent space to disentangle distinct elements, investigate their roles in distinguishing emotions, and further manipulate elements to alter musical emotion. Specifically, we propose a novel VQ-VAE-based model named MusER. MusER incorporates a regularization loss that enforces the correspondence between musical element sequences and specific dimensions of latent variable sequences, providing a new solution for disentangling discrete sequences. Taking advantage of the disentangled latent vectors, we devise a two-level decoding strategy in which multiple decoders attend to latent vectors with different semantics to better predict the elements. By visualizing the latent space, we show that MusER yields a disentangled and interpretable latent space, and we gain insights into the contribution of distinct elements to the emotional dimensions (i.e., arousal and valence). Experimental results demonstrate that MusER outperforms state-of-the-art models for generating emotional music in both objective and subjective evaluation. In addition, we rearrange music through element transfer and attempt to alter the emotion of music by transferring emotion-distinguishable elements.
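The abstract does not spell out the form of the element-based regularization loss. As a minimal illustrative sketch only (the function name, the pairwise sign-agreement formulation, and all parameters are assumptions, not the paper's actual implementation), one common way to tie a single latent dimension to an attribute such as pitch is to penalize disagreement between the ordering of latent values and the ordering of element values within a batch:

```python
import numpy as np

def element_regularization_loss(z_dim_vals, elem_vals, delta=10.0):
    """Hypothetical element-based regularization sketch.

    Encourages one chosen latent dimension to vary monotonically with a
    musical-element value (e.g. mean pitch), so that dimension becomes
    interpretable and manipulable.

    z_dim_vals : (B,) values of the chosen latent dimension over a batch
    elem_vals  : (B,) corresponding musical-element values
    delta      : sharpness of the soft sign approximation (assumed)
    """
    # Pairwise differences within the batch (B x B matrices)
    dz = z_dim_vals[:, None] - z_dim_vals[None, :]
    de = elem_vals[:, None] - elem_vals[None, :]
    # Penalize mismatch between the soft sign of latent differences
    # and the hard sign of element differences: the loss is near zero
    # when the latent dimension preserves the element ordering.
    return float(np.mean(np.abs(np.tanh(delta * dz) - np.sign(de))))
```

This is only one plausible instantiation of latent-space attribute regularization; the paper's actual loss enforcing correspondence between element sequences and latent dimensions may differ.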

Published

2024-03-24

How to Cite

Ji, S., & Yang, X. (2024). MusER: Musical Element-Based Regularization for Generating Symbolic Music with Emotion. Proceedings of the AAAI Conference on Artificial Intelligence, 38(11), 12821-12829. https://doi.org/10.1609/aaai.v38i11.29178

Section

AAAI Technical Track on Machine Learning II