FlexComb: A Facial Landmark-Based Model for Expression Combination Generation

Authors

  • Bogdan Pikula, University of Toronto
  • Steve Engels, University of Toronto

DOI

https://doi.org/10.1609/aiide.v19i1.27529

Keywords

Facial Expression, Affective Computing, Emotion Generation, Human-computer Interaction, Deep Learning

Abstract

Facial expressions are a crucial but challenging aspect of animating in-game characters. They provide vital nonverbal communication cues, but given the high complexity and variability of human faces, capturing their natural diversity and affective complexity can be a labour-intensive process for animators. This motivates the need for more accurate, realistic, and lightweight methods for generating emotional expressions for in-game characters. In this work, we introduce FlexComb, a Facial Landmark-based Expression Combination model designed to generate a real-time space of realistic facial expression combinations. FlexComb leverages the highly varied CelebV-HQ dataset, which contains emotions in the wild, and a transformer-based architecture. The central component of the FlexComb system is an emotion recognition model that is trained on the facial dataset and then used to generate a larger dataset of tagged faces. The resulting system generates in-game facial expressions by sampling from this tagged dataset, including expressions that combine emotions in specified amounts. This allows in-game characters to take on a variety of realistic facial expressions for a single emotion, addressing a primary challenge of facial emotion modeling. FlexComb shows potential for expressive facial emotion simulation, with applications that include animation, video game development, virtual reality, and human-computer interaction.
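
The abstract does not include an implementation, but the sampling step it describes — retrieving a tagged face whose emotion scores match a requested blend — can be sketched roughly as below. The dataset shapes, emotion labels, and `sample_combination` helper are illustrative assumptions for this sketch, not the authors' code or data format.

```python
import numpy as np

# Illustrative stand-in for the tagged dataset described in the abstract:
# each face is a flattened landmark vector paired with per-emotion scores
# assigned by the trained recognition model. All names and shapes here are
# assumptions, not the paper's actual data format.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

rng = np.random.default_rng(0)
num_faces, num_landmarks = 1_000, 68
landmarks = rng.normal(size=(num_faces, num_landmarks * 2))      # placeholder (x, y) coordinates
scores = rng.dirichlet(np.ones(len(EMOTIONS)), size=num_faces)   # placeholder emotion tags

def sample_combination(target: dict[str, float], k: int = 5) -> np.ndarray:
    """Return the landmarks of a tagged face matching a requested emotion blend.

    `target` maps emotion names to desired intensities, e.g.
    {"happiness": 0.7, "surprise": 0.3}. Sampling uniformly among the
    k nearest tagged faces yields varied expressions for the same blend.
    """
    t = np.array([target.get(e, 0.0) for e in EMOTIONS])
    dists = np.linalg.norm(scores - t, axis=1)   # distance in emotion-score space
    candidates = np.argsort(dists)[:k]           # k best-matching tagged faces
    return landmarks[rng.choice(candidates)]

# A 70/30 happiness-surprise blend; repeated calls give different but
# plausible expression variants for the same target combination.
face = sample_combination({"happiness": 0.7, "surprise": 0.3})
```

Sampling among the k nearest matches, rather than always taking the single closest face, mirrors the paper's emphasis on producing a variety of realistic expressions for one emotion target.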

Published

2023-10-06

How to Cite

Pikula, B., & Engels, S. (2023). FlexComb: A Facial Landmark-Based Model for Expression Combination Generation. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 19(1), 337-342. https://doi.org/10.1609/aiide.v19i1.27529