Style2Talker: High-Resolution Talking Head Generation with Emotion Style and Art Style

Authors

  • Shuai Tan Shanghai Jiao Tong University
  • Bin Ji Shanghai Jiao Tong University
  • Ye Pan Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v38i5.28313

Keywords:

CV: Multi-modal Vision, CV: Applications, CV: Computational Photography, Image & Video Synthesis

Abstract

Although automatically animating audio-driven talking heads has recently received growing interest, previous efforts have mainly concentrated on achieving lip synchronization with the audio, neglecting two elements crucial to generating expressive videos: emotion style and art style. In this paper, we present an innovative audio-driven talking face generation method called Style2Talker. It involves two stylized stages, namely Style-E and Style-A, which integrate text-controlled emotion style and picture-controlled art style into the final output. To address the scarcity of emotional text descriptions for existing videos, we propose a labor-free paradigm that employs large-scale pretrained models to automatically annotate emotional text labels for existing audio-visual datasets. Incorporating these synthetic emotion texts, the Style-E stage uses a large-scale CLIP model to extract emotion representations, which are combined with the audio to condition an efficient latent diffusion model that produces emotional motion coefficients of a 3DMM model. In the Style-A stage, we develop a coefficient-driven motion generator and an art-specific style path embedded in the well-known StyleGAN, allowing us to synthesize high-resolution, artistically stylized talking head videos from the generated emotional motion coefficients and an art style source picture. Moreover, to better preserve image details and avoid artifacts, we supply StyleGAN with multi-scale content features extracted from the identity image by a dedicated content encoder and refine its intermediate feature maps with a refinement network. Extensive experimental results demonstrate that our method outperforms existing state-of-the-art methods in audio-lip synchronization and in the rendering of both emotion style and art style.
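The two-stage pipeline in the abstract can be outlined as follows. This is a minimal structural sketch, not the paper's implementation: the function names, the coefficient dimension, and the trivial per-frame "fusion" arithmetic are all hypothetical placeholders standing in for the latent diffusion model (Style-E) and the StyleGAN-based renderer (Style-A).

```python
# Illustrative sketch of the Style2Talker two-stage flow described above.
# All names, dimensions, and math here are hypothetical placeholders.
from dataclasses import dataclass
from typing import List

COEFF_DIM = 64  # hypothetical 3DMM expression-coefficient dimension


@dataclass
class StyleE:
    """Stage 1: audio features + CLIP emotion embedding -> 3DMM motion coefficients."""

    def generate_coefficients(
        self, audio_feats: List[List[float]], emotion_embed: List[float]
    ) -> List[List[float]]:
        # Stand-in for the latent diffusion model conditioned on both signals:
        # here we simply average the audio and emotion vectors per frame.
        return [
            [(a + e) / 2.0 for a, e in zip(frame, emotion_embed)]
            for frame in audio_feats
        ]


@dataclass
class StyleA:
    """Stage 2: motion coefficients + art-style picture -> stylized frames."""

    def render(self, coeffs: List[List[float]], style_image_id: str) -> List[str]:
        # Stand-in for the coefficient-driven motion generator and the
        # art-specific style path embedded in StyleGAN.
        return [f"{style_image_id}:frame_{i}" for i, _ in enumerate(coeffs)]


def style2talker(
    audio_feats: List[List[float]], emotion_embed: List[float], style_image_id: str
) -> List[str]:
    coeffs = StyleE().generate_coefficients(audio_feats, emotion_embed)
    return StyleA().render(coeffs, style_image_id)


# Toy inputs: 3 audio frames, each a COEFF_DIM-long feature vector.
audio = [[0.1] * COEFF_DIM for _ in range(3)]
emotion = [0.5] * COEFF_DIM
frames = style2talker(audio, emotion, "van_gogh.png")
```

The point of the sketch is the data flow only: emotion text and audio jointly condition coefficient generation, and those coefficients, together with an art-style source picture, drive the final rendering.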

Published

2024-03-24

How to Cite

Tan, S., Ji, B., & Pan, Y. (2024). Style2Talker: High-Resolution Talking Head Generation with Emotion Style and Art Style. Proceedings of the AAAI Conference on Artificial Intelligence, 38(5), 5079-5087. https://doi.org/10.1609/aaai.v38i5.28313

Issue

Section

AAAI Technical Track on Computer Vision IV