Music-to-Facial Expressions: Emotion-Based Music Visualization for the Hearing Impaired

Authors

  • Yubo Wang Washington University in St. Louis
  • Fengzhou Pan Washington University in St. Louis
  • Danni Liu Washington University in St. Louis
  • Jiaxiong Hu Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v37i13.26912

Keywords:

Music Visualization, Accessibility, Human Factors, Emotion

Abstract

While music is made to convey messages and emotions, auditory music is not equally accessible to everyone. Music visualization is a common approach to augment the listening experience of hearing users and to provide music experiences for the hearing-impaired. In this paper, we present a music visualization system that turns a piece of music into a series of facial expressions representative of the continuously changing sentiments in the music. The resulting facial expressions, recorded as action units, can later animate a static virtual avatar so that it emotes in synchrony with the music.
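
The abstract describes a pipeline of audio → per-frame sentiment → facial action units (AUs) → avatar animation. The sketch below is only a minimal illustration of that shape, not the authors' implementation: `predict_valence_arousal` stands in for a trained music-emotion model (hypothetical here), and the valence/arousal-to-AU mapping is a simplified heuristic for demonstration.

```python
import numpy as np
import librosa


def predict_valence_arousal(frame_features):
    """Hypothetical placeholder for a trained music-emotion model.

    Maps a feature vector for one analysis window to a (valence, arousal)
    pair in [-1, 1]; the paper's actual model is not reproduced here.
    """
    valence = np.tanh(frame_features.mean())
    arousal = np.tanh(frame_features.std())
    return float(valence), float(arousal)


def emotion_to_action_units(valence, arousal):
    """Toy mapping from a valence/arousal point to FACS action-unit
    intensities in [0, 1]; illustrative only."""
    return {
        "AU12_lip_corner_puller": max(valence, 0.0),         # smile with positive valence
        "AU04_brow_lowerer":      max(-valence, 0.0),        # frown with negative valence
        "AU05_upper_lid_raiser":  max(arousal, 0.0),         # wide eyes with high arousal
        "AU43_eyes_closed":       max(-arousal, 0.0) * 0.5,  # relaxed lids with low arousal
    }


def music_to_au_sequence(audio_path, hop_s=0.5):
    """Turn an audio file into a time-stamped sequence of AU dictionaries,
    which an avatar renderer could consume to animate a static face."""
    y, sr = librosa.load(audio_path, sr=22050, mono=True)
    hop = int(hop_s * sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20, hop_length=hop)
    frames = []
    for i in range(mfcc.shape[1]):
        v, a = predict_valence_arousal(mfcc[:, i])
        frames.append({"time_s": i * hop_s, "aus": emotion_to_action_units(v, a)})
    return frames
```

In the paper's setting, the per-frame AU intensities would drive a virtual avatar's face in time with playback, so the changing sentiment of the music is rendered visually rather than acoustically.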

Published

2024-07-15

How to Cite

Wang, Y., Pan, F., Liu, D., & Hu, J. (2024). Music-to-Facial Expressions: Emotion-Based Music Visualization for the Hearing Impaired. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 16096-16102. https://doi.org/10.1609/aaai.v37i13.26912

Issue

Vol. 37 No. 13

Section

EAAI Symposium: Human-Aware AI in Sound and Music