MoMusic: A Motion-Driven Human-AI Collaborative Music Composition and Performing System
Keywords: Music Generation, Motion Detection, Voice Conversion, Human-Computer Interaction
Abstract
The significant development of artificial neural network architectures has facilitated the increasing adoption of automated music composition models over the past few years. However, most existing systems feature algorithmic generative structures based on hard-coded, predefined rules, generally excluding interactive or improvised behaviors. We propose a motion-based music system, MoMusic, as an AI real-time music generation system. MoMusic features a partially randomized harmonic sequencing model based on a probabilistic analysis of tonal chord progressions, mathematically abstracted through musical set theory. This model is presented against a two-dimensional grid that produces the resulting sounds through a posture recognition mechanism. A camera captures the movements and trajectories of the users' fingers, creating coherent, partially improvised harmonic progressions. MoMusic integrates several timbrical registers, from traditional classical instruments such as the piano to a new "human voice instrument" created using a voice conversion technique. Our research demonstrates MoMusic's interactiveness, its ability to inspire musicians, and its ability to generate coherent musical material across various timbrical registers. MoMusic's capabilities could easily be expanded to incorporate different forms of posture-controlled timbrical transformation, rhythmic transformation, dynamic transformation, or even digital sound processing techniques.
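The "partially randomized harmonic sequencing" the abstract describes can be illustrated with a first-order Markov chain over chord functions. The transition table and function names below are purely hypothetical assumptions for illustration; they are not the probabilities or the API used in MoMusic itself, which derives its model from an analysis of tonal chord progressions.

```python
import random

# Hypothetical transition probabilities between diatonic chord functions
# (Roman numerals). Illustrative values only, not those used in MoMusic.
TRANSITIONS = {
    "I":    {"IV": 0.3, "V": 0.3, "vi": 0.2, "ii": 0.2},
    "ii":   {"V": 0.6, "vii°": 0.2, "IV": 0.2},
    "IV":   {"V": 0.4, "I": 0.3, "ii": 0.3},
    "V":    {"I": 0.6, "vi": 0.3, "IV": 0.1},
    "vi":   {"ii": 0.4, "IV": 0.4, "V": 0.2},
    "vii°": {"I": 0.7, "vi": 0.3},
}

def generate_progression(start="I", length=8):
    """Sample a partially randomized chord progression from the Markov table."""
    progression = [start]
    for _ in range(length - 1):
        options = TRANSITIONS[progression[-1]]
        chords = list(options.keys())
        weights = list(options.values())
        # Weighted random choice: coherent overall, improvised in detail.
        progression.append(random.choices(chords, weights=weights)[0])
    return progression

if __name__ == "__main__":
    print(generate_progression())
```

In a system like MoMusic, each sampled chord would then be mapped onto a cell of the performance grid, so that the user's finger position selects among tonally plausible continuations rather than arbitrary pitches.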
How to Cite
Bian, W., Song, Y., Gu, N., Chan, T. Y., Lo, T. T., Li, T. S., Wong, K. C., Xue, W., & Alonso Trillo, R. (2023). MoMusic: A Motion-Driven Human-AI Collaborative Music Composition and Performing System. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 16057-16062. https://doi.org/10.1609/aaai.v37i13.26907
EAAI Symposium: Human-Aware AI in Sound and Music