StyleTalk: One-Shot Talking Head Generation with Controllable Speaking Styles

Authors

  • Yifeng Ma, Department of Computer Science and Technology, BNRist, THUAI, State Key Laboratory of Intelligent Technology and Systems, Tsinghua University
  • Suzhen Wang, Virtual Human Group, Netease Fuxi AI Lab
  • Zhipeng Hu, Virtual Human Group, Netease Fuxi AI Lab; Zhejiang University
  • Changjie Fan, Virtual Human Group, Netease Fuxi AI Lab
  • Tangjie Lv, Virtual Human Group, Netease Fuxi AI Lab
  • Yu Ding, Virtual Human Group, Netease Fuxi AI Lab; Zhejiang University
  • Zhidong Deng, Department of Computer Science and Technology, BNRist, THUAI, State Key Laboratory of Intelligent Technology and Systems, Tsinghua University
  • Xin Yu, University of Technology Sydney

DOI:

https://doi.org/10.1609/aaai.v37i2.25280

Keywords:

CV: Computational Photography, Image & Video Synthesis; CV: Biometrics, Face, Gesture & Pose; CV: Language and Vision; CV: Multi-modal Vision

Abstract

Different people speak with diverse, personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to obtain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with that reference style and another piece of audio. Specifically, we first develop a style encoder to extract the dynamic facial motion patterns of a style reference video and encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. To integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the style code to adjust the weights of the feed-forward layers. Thanks to this style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method can generate talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
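
To give a concrete picture of the style-aware adaptation described in the abstract, below is a minimal PyTorch-style sketch of how a style code could modulate a transformer block's feed-forward weights by softly mixing a bank of candidate parameter sets. The module name StyleAwareFFN, the kernel-mixing scheme, and all dimensions are illustrative assumptions for this sketch, not the authors' released implementation.

# Minimal sketch of a style-aware adaptive feed-forward layer.
# ASSUMPTION: the style code produces softmax mixing weights over K candidate
# weight sets; names and sizes are illustrative, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleAwareFFN(nn.Module):
    def __init__(self, d_model=256, d_hidden=1024, d_style=256, num_kernels=8):
        super().__init__()
        # K candidate parameter sets for the two linear layers of the FFN.
        self.w1 = nn.Parameter(torch.randn(num_kernels, d_model, d_hidden) * 0.02)
        self.b1 = nn.Parameter(torch.zeros(num_kernels, d_hidden))
        self.w2 = nn.Parameter(torch.randn(num_kernels, d_hidden, d_model) * 0.02)
        self.b2 = nn.Parameter(torch.zeros(num_kernels, d_model))
        # Maps the style code to mixing weights over the K candidates.
        self.attn = nn.Linear(d_style, num_kernels)

    def forward(self, x, style_code):
        # x: (B, T, d_model) content features; style_code: (B, d_style)
        pi = F.softmax(self.attn(style_code), dim=-1)       # (B, K)
        w1 = torch.einsum('bk,kio->bio', pi, self.w1)       # per-sample (d_model, d_hidden)
        b1 = torch.einsum('bk,ko->bo', pi, self.b1)
        w2 = torch.einsum('bk,kio->bio', pi, self.w2)       # per-sample (d_hidden, d_model)
        b2 = torch.einsum('bk,ko->bo', pi, self.b2)
        h = torch.relu(torch.einsum('bti,bio->bto', x, w1) + b1.unsqueeze(1))
        return torch.einsum('bti,bio->bto', h, w2) + b2.unsqueeze(1)

# Example: features for 2 clips of 50 frames, each with its own style code.
# y = StyleAwareFFN()(torch.randn(2, 50, 256), torch.randn(2, 256))  # -> (2, 50, 256)

One plausible reading of this design choice: because the style code re-parameterizes whole weight matrices rather than simply being concatenated to the input, the same audio content can be decoded into visibly different expression dynamics, keeping lip sync tied to speech while the style controls how the face moves.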

Published

2023-06-26

How to Cite

Ma, Y., Wang, S., Hu, Z., Fan, C., Lv, T., Ding, Y., Deng, Z., & Yu, X. (2023). StyleTalk: One-Shot Talking Head Generation with Controllable Speaking Styles. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 1896-1904. https://doi.org/10.1609/aaai.v37i2.25280

Issue

Vol. 37 No. 2 (2023)

Section

AAAI Technical Track on Computer Vision II