Taming Diffusion Models for Music-Driven Conducting Motion Generation

Authors

  • Zhuoran Zhao, National University of Singapore
  • Jinbin Bai, National University of Singapore
  • Delong Chen, Xiaobing.ai
  • Debang Wang, National University of Singapore
  • Yubo Pan, National University of Singapore

DOI:

https://doi.org/10.1609/aaaiss.v1i1.27474

Keywords:

Music-driven Motion Generation, Diffusion Models, Controllable Content Creation

Abstract

Generating the motion of orchestral conductors from a given piece of symphony music is a challenging task, since it requires a model to learn semantic music features and to capture the underlying distribution of real conducting motion. Prior works have applied Generative Adversarial Networks (GANs) to this task, but the promising diffusion model, which has recently shown advantages in both training stability and output quality, has not yet been exploited in this context. This paper presents Diffusion-Conductor, a novel DDIM-based approach for music-driven conducting motion generation that integrates the diffusion model into a two-stage learning framework. We further propose a random masking strategy to improve feature robustness, and use a pair of geometric loss functions to impose additional regularization and increase motion diversity. We also design several novel metrics, including Fréchet Gesture Distance (FGD) and Beat Consistency Score (BC), for a more comprehensive evaluation of the generated motion. Experimental results demonstrate the advantages of our model. The code is released at https://github.com/viiika/Diffusion-Conductor.
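
For readers unfamiliar with the techniques named in the abstract, the snippet below is a minimal, illustrative sketch (not the authors' released code, which is available in the linked repository) of two such pieces: a deterministic DDIM denoising step and a pair of velocity/acceleration regularizers of the kind commonly used as geometric losses on motion sequences. All function names, tensor shapes, and loss weights here are assumptions made for illustration and may differ from the paper's actual formulation.

```python
import torch


@torch.no_grad()
def ddim_step(x_t, eps_pred, alpha_bar_t, alpha_bar_prev):
    """One deterministic DDIM update (eta = 0) for a noisy motion sequence x_t,
    given the noise eps_pred predicted by a music-conditioned denoiser."""
    # Recover an estimate of the clean motion from the predicted noise.
    x0_pred = (x_t - (1.0 - alpha_bar_t) ** 0.5 * eps_pred) / alpha_bar_t ** 0.5
    # Re-noise that estimate to the previous (less noisy) timestep.
    return alpha_bar_prev ** 0.5 * x0_pred + (1.0 - alpha_bar_prev) ** 0.5 * eps_pred


def geometric_losses(pred, real, w_vel=1.0, w_acc=1.0):
    """Velocity/acceleration regularizers on motion tensors of shape
    (batch, frames, joints * dims), via finite differences along the time axis."""
    # First-order differences approximate joint velocities.
    pred_vel, real_vel = pred[:, 1:] - pred[:, :-1], real[:, 1:] - real[:, :-1]
    # Second-order differences approximate joint accelerations.
    pred_acc = pred_vel[:, 1:] - pred_vel[:, :-1]
    real_acc = real_vel[:, 1:] - real_vel[:, :-1]
    return (w_vel * torch.mean((pred_vel - real_vel) ** 2)
            + w_acc * torch.mean((pred_acc - real_acc) ** 2))
```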

Published

2023-10-03