Local Consistency Guidance: Personalized Stylization Method of Face Video (Student Abstract)

Authors

  • Wancheng Feng, Shandong University of Science and Technology
  • Yingchao Liu, Shandong University of Science and Technology
  • Jiaming Pei, University of Sydney
  • Wenxuan Liu, Shandong University of Science and Technology
  • Chunpeng Tian, Shandong University of Science and Technology
  • Lukun Wang, Shandong University of Science and Technology

DOI

https://doi.org/10.1609/aaai.v38i21.30440

Keywords

Face Video Stylization, Personalized Diffusion Model, Local Consistency Guidance

Abstract

Face video stylization aims to convert real face videos into a specified reference style. While one-shot methods perform well in single-image stylization, video stylization poses two further challenges: ensuring continuity between frames and preserving the original facial expressions. To address these issues, our approach employs a personalized diffusion model with pixel-level control. We propose a Local Consistency Guidance (LCG) strategy, composed of local cross-attention and local style transfer, to ensure temporal consistency. This framework enables the synthesis of high-quality stylized face videos with excellent temporal continuity.

Published

2024-03-24

How to Cite

Feng, W., Liu, Y., Pei, J., Liu, W., Tian, C., & Wang, L. (2024). Local Consistency Guidance: Personalized Stylization Method of Face Video (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23486-23487. https://doi.org/10.1609/aaai.v38i21.30440