AnyTalk: Multi-modal Driven Multi-domain Talking Head Generation

Authors

  • Yu Wang International Digital Economy Academy (IDEA)
  • Yunfei Liu International Digital Economy Academy (IDEA)
  • Fa-Ting Hong International Digital Economy Academy (IDEA)
  • Meng Cao International Digital Economy Academy (IDEA)
  • Lijian Lin International Digital Economy Academy (IDEA)
  • Yu Li International Digital Economy Academy (IDEA)

DOI:

https://doi.org/10.1609/aaai.v39i8.32874

Abstract

Cross-domain talking head generation, such as animating a static cartoon animal photo with real human video, is crucial for personalized content creation. However, prior works typically rely on domain-specific frameworks and paired videos, limiting their utility and complicating their architectures with additional motion alignment modules. Addressing these shortcomings, we propose AnyTalk, a unified framework that eliminates the need for paired data and learns a shared motion representation across different domains. The motion is represented by canonical 3D keypoints extracted with an unsupervised 3D keypoint detector. Further, we propose an expression consistency loss to improve the accuracy of facial dynamics in video generation. Additionally, we present AniTalk, a comprehensive dataset designed for advanced multi-modal cross-domain generation. Our experiments demonstrate that AnyTalk excels at generating high-quality, multi-modal talking head videos, showcasing remarkable generalization capabilities across diverse domains.
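As a rough illustration of the expression consistency idea mentioned above, the loss can be read as penalizing the distance between the expression representation of the driving frame and that of the generated frame. The sketch below is a minimal NumPy version under that assumption; the function name, the use of an L1 distance, and the embedding shapes are hypothetical, since the abstract does not give the exact formulation.

```python
import numpy as np

def expression_consistency_loss(expr_driving: np.ndarray,
                                expr_generated: np.ndarray) -> float:
    """Hypothetical expression consistency loss: mean absolute (L1)
    distance between the expression embedding extracted from the
    driving frame and the one extracted from the generated frame.
    The paper's actual formulation may differ."""
    return float(np.mean(np.abs(expr_driving - expr_generated)))

# Toy usage: identical embeddings give zero loss, so minimizing this
# term pushes the generated frame's expression toward the driver's.
drv = np.array([0.2, -0.5, 1.0, 0.0])
gen = np.array([0.2, -0.5, 1.0, 0.0])
loss = expression_consistency_loss(drv, gen)  # → 0.0
```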

Published

2025-04-11

How to Cite

Wang, Y., Liu, Y., Hong, F.-T., Cao, M., Lin, L., & Li, Y. (2025). AnyTalk: Multi-modal Driven Multi-domain Talking Head Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 39(8), 8105–8113. https://doi.org/10.1609/aaai.v39i8.32874

Section

AAAI Technical Track on Computer Vision VII