PointTalk: Audio-Driven Dynamic Lip Point Cloud for 3D Gaussian-based Talking Head Synthesis

Authors

  • Yifan Xie, Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ); Xi'an Jiaotong University
  • Tao Feng, Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ)
  • Xin Zhang, Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ)
  • Xiangyang Luo, Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ)
  • Zixuan Guo, Peking University
  • Weijiang Yu, Sun Yat-sen University
  • Heng Chang, Tsinghua University
  • Fei Ma, Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ)
  • Fei Richard Yu, Shenzhen University; Carleton University

DOI:

https://doi.org/10.1609/aaai.v39i8.32946

Abstract

Talking head synthesis from arbitrary speech audio is a crucial challenge in the field of digital humans. Recently, methods based on radiance fields have received increasing attention for their ability to synthesize high-fidelity, identity-consistent talking heads from just a few minutes of training video. However, due to the limited scale of the training data, these methods often exhibit poor audio-lip synchronization and visual quality. In this paper, we propose PointTalk, a novel 3D Gaussian-based method that constructs a static 3D Gaussian field of the head and deforms it in sync with the audio. It further introduces an audio-driven dynamic lip point cloud as a critical component of the conditional information, thereby facilitating effective talking head synthesis. Specifically, we first generate the lip point cloud corresponding to the audio signal and capture its topological structure. A dynamic difference encoder is designed to capture the subtle nuances of dynamic lip movements more effectively. Furthermore, we integrate an audio-point enhancement module, which not only synchronizes the audio signal with the corresponding lip point cloud in the feature space but also deepens the model's understanding of the interrelations among cross-modal conditional features. Extensive experiments demonstrate that our method achieves superior fidelity and audio-lip synchronization in talking head synthesis compared to previous methods.
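To make the pipeline described in the abstract concrete, the sketch below wires together the three named components: a dynamic difference encoder over consecutive lip point clouds, an audio-point enhancement module that aligns audio and point features, and a head that deforms the static 3D Gaussian field. This is a minimal PyTorch-style illustration, not the authors' implementation; all class names, tensor shapes, the mean-pooling, the cross-attention fusion, and the offset-prediction head are our assumptions.

```python
# Illustrative sketch of the PointTalk pipeline described in the abstract.
# All module names, shapes, and design details are assumptions, not the
# authors' released code.
import torch
import torch.nn as nn

class DynamicDifferenceEncoder(nn.Module):
    """Encodes frame-to-frame lip point cloud differences (assumed design)."""
    def __init__(self, dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, points_t, points_prev):
        # points_*: (B, N, 3) lip point clouds for current / previous frame
        diff = points_t - points_prev           # per-point motion between frames
        return self.mlp(diff).mean(dim=1)       # (B, dim) pooled motion feature

class AudioPointEnhancement(nn.Module):
    """Cross-attends point features to audio features to align both modalities
    in a shared feature space (assumed fusion mechanism)."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, audio_feat, point_feat):
        # audio_feat: (B, T, dim); point_feat: (B, 1, dim)
        fused, _ = self.attn(query=point_feat, key=audio_feat, value=audio_feat)
        return fused.squeeze(1)                 # (B, dim) fused condition

class GaussianDeformer(nn.Module):
    """Predicts per-Gaussian position offsets that deform the static 3D
    Gaussian head field (assumed head; real systems also update other
    Gaussian attributes)."""
    def __init__(self, dim=64, num_gaussians=10000):
        super().__init__()
        self.head = nn.Linear(dim, num_gaussians * 3)
        self.num_gaussians = num_gaussians

    def forward(self, cond):
        return self.head(cond).view(-1, self.num_gaussians, 3)  # xyz offsets

# Usage: audio features and two consecutive lip point clouds drive the deformation.
B, T, N, dim = 2, 16, 512, 64
audio_feat = torch.randn(B, T, dim)             # pre-extracted audio features
pts_t, pts_prev = torch.randn(B, N, 3), torch.randn(B, N, 3)

motion = DynamicDifferenceEncoder(dim)(pts_t, pts_prev)           # (B, dim)
cond = AudioPointEnhancement(dim)(audio_feat, motion.unsqueeze(1))
offsets = GaussianDeformer(dim)(cond)                             # (B, 10000, 3)
```

The key design point the abstract emphasizes is that the condition fed to the Gaussian deformation is not the raw audio alone but a fused audio-point feature, which is what ties lip geometry to the speech signal.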

Published

2025-04-11

How to Cite

Xie, Y., Feng, T., Zhang, X., Luo, X., Guo, Z., Yu, W., … Yu, F. R. (2025). PointTalk: Audio-Driven Dynamic Lip Point Cloud for 3D Gaussian-based Talking Head Synthesis. Proceedings of the AAAI Conference on Artificial Intelligence, 39(8), 8753–8761. https://doi.org/10.1609/aaai.v39i8.32946

Section

AAAI Technical Track on Computer Vision VII