Robust Video Portrait Reenactment via Personalized Representation Quantization

Authors

  • Kaisiyuan Wang, The University of Sydney
  • Changcheng Liang, Xidian University
  • Hang Zhou, Baidu Inc.
  • Jiaxiang Tang, Peking University
  • Qianyi Wu, Monash University
  • Dongliang He, Baidu Inc.
  • Zhibin Hong, Baidu Inc.
  • Jingtuo Liu, Baidu Inc.
  • Errui Ding, Baidu Inc.
  • Ziwei Liu, Nanyang Technological University
  • Jingdong Wang, Baidu Inc.

DOI:

https://doi.org/10.1609/aaai.v37i2.25354

Keywords:

CV: Computational Photography, Image & Video Synthesis

Abstract

While progress has been made in portrait reenactment, producing high-fidelity, robust videos remains an open problem. Existing methods typically struggle with rarely seen target poses because of the limited coverage of source data. This paper proposes the Video Portrait via Non-local Quantization Modeling (VPNQ) framework, which produces pose- and disturbance-robust reenactable video portraits. Our key insight is to learn position-invariant quantized local patch representations and to build a mapping between simple driving signals and local textures through non-local spatio-temporal modeling. Specifically, instead of learning a universal quantized codebook, we find that a personalized codebook can be trained to better preserve the desired position-invariant local details. A simple representation of projected landmarks then serves as a sufficient driving signal, avoiding 3D rendering. Finally, a carefully designed Spatio-Temporal Transformer predicts reasonable and temporally consistent quantized tokens from the driving signal; the predicted codes are decoded back into robust, high-quality videos. Comprehensive experiments validate the effectiveness of our approach.
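To make the pipeline concrete, below is a minimal PyTorch sketch of the two components the abstract describes: a personalized codebook that quantizes local patch features into tokens, and a spatio-temporal transformer that maps projected-landmark driving signals to those tokens. All names, shapes, and hyperparameters here (PatchQuantizer, SpatioTemporalTransformer, the code/patch/frame counts, layer sizes) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; architectural details are assumptions, not VPNQ's code.
import torch
import torch.nn as nn


class PatchQuantizer(nn.Module):
    """Personalized codebook: snaps local patch features to their nearest codes."""

    def __init__(self, num_codes=512, code_dim=256):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, feats):  # feats: (B, N, D) local patch features
        # Squared L2 distance from every patch feature to every code.
        w = self.codebook.weight  # (num_codes, D)
        d = (feats.pow(2).sum(-1, keepdim=True)
             - 2 * feats @ w.t()
             + w.pow(2).sum(-1))
        tokens = d.argmin(dim=-1)          # (B, N) nearest-code indices
        quantized = self.codebook(tokens)  # (B, N, D)
        # Straight-through estimator so gradients flow to the patch encoder.
        quantized = feats + (quantized - feats).detach()
        return quantized, tokens


class SpatioTemporalTransformer(nn.Module):
    """Predicts a codebook token per patch per frame from projected landmarks,
    attending non-locally across both space (patches) and time (frames)."""

    def __init__(self, num_codes=512, dim=256, landmark_dim=136,
                 num_patches=64, num_frames=8):
        super().__init__()
        self.drive_proj = nn.Linear(landmark_dim, dim)
        self.patch_embed = nn.Parameter(torch.randn(num_patches, dim) * 0.02)
        self.time_embed = nn.Parameter(torch.randn(num_frames, dim) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, num_codes)  # logits over codebook entries

    def forward(self, landmarks):  # landmarks: (B, T, landmark_dim)
        B, T, _ = landmarks.shape
        drive = self.drive_proj(landmarks)  # (B, T, dim)
        # Broadcast the driving signal to every patch position and add
        # learned spatial and temporal embeddings.
        x = (drive[:, :, None, :]
             + self.patch_embed[None, None]
             + self.time_embed[None, :T, None, :])  # (B, T, N, dim)
        x = x.reshape(B, T * self.patch_embed.shape[0], -1)  # flatten space-time
        return self.head(self.encoder(x))  # (B, T*N, num_codes) token logits


# Usage sketch: 68 2D landmarks per frame -> token logits; the argmax indices
# would be looked up in the codebook and rendered by a decoder (omitted here).
model = SpatioTemporalTransformer()
logits = model(torch.randn(2, 8, 136))  # batch of 2 clips, 8 frames each
pred_tokens = logits.argmax(dim=-1)     # (2, 512) predicted code indices

# During training, tokens produced by quantizing ground-truth frames could
# serve as the cross-entropy targets for the transformer's predictions.
quantizer = PatchQuantizer()
_, target_tokens = quantizer(torch.randn(2, 64, 256))
```

One design note reflected above: predicting discrete codebook indices rather than raw pixels is what lets a lightweight driving signal (projected landmarks) stand in for 3D rendering, since the personalized codebook already stores the subject's local texture detail.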

Published

2023-06-26

How to Cite

Wang, K., Liang, C., Zhou, H., Tang, J., Wu, Q., He, D., Hong, Z., Liu, J., Ding, E., Liu, Z., & Wang, J. (2023). Robust Video Portrait Reenactment via Personalized Representation Quantization. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 2564-2572. https://doi.org/10.1609/aaai.v37i2.25354

Section

AAAI Technical Track on Computer Vision II