Realistic Face Reenactment via Self-Supervised Disentangling of Identity and Pose

Authors

  • Xianfang Zeng, Zhejiang University
  • Yusu Pan, Zhejiang University
  • Mengmeng Wang, Zhejiang University
  • Jiangning Zhang, Zhejiang University
  • Yong Liu, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v34i07.6970

Abstract

Recent works have shown how realistic talking-face images can be obtained under the supervision of geometric guidance, e.g., facial landmarks or boundaries. To alleviate the demand for manual annotations, in this paper we propose a novel self-supervised hybrid model (DAE-GAN) that learns to reenact faces naturally given large amounts of unlabeled videos. Our approach combines two deforming autoencoders with the latest advances in conditional generation. On the one hand, we adopt deforming autoencoders to disentangle identity and pose representations. A strong prior in talking-face videos is that each frame can be encoded as two parts: one for the video-specific identity and the other for the varying pose. Inspired by this, we utilize a multi-frame deforming autoencoder to learn a pose-invariant embedded face for each video, and we propose a multi-scale deforming autoencoder to extract pose-related information from each frame. On the other hand, the conditional generator enhances fine details and overall realism: it leverages the disentangled features to generate photo-realistic, pose-consistent face images. We evaluate our model on the VoxCeleb1 and RaFD datasets. Experimental results demonstrate the superior quality of the reenacted images and the flexibility of transferring facial movements between identities.
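To make the disentangling idea concrete, below is a minimal PyTorch sketch of the self-supervised identity/pose factorization the abstract describes: an identity code averaged over several frames of one video (hence pose-invariant) and a per-frame pose code, recombined by a decoder and trained with a reconstruction loss. All module names, layer sizes, and the training step are illustrative assumptions, not the authors' DAE-GAN architecture (which also includes deformation fields and an adversarial generator).

```python
# Illustrative sketch only (not the authors' released code): a simple
# identity/pose disentangling autoencoder trained by self-supervised
# reconstruction. Architecture details below are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Tiny convolutional encoder mapping a 64x64 RGB frame to a vector."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Decodes concatenated identity and pose codes back into a frame."""
    def __init__(self, id_dim, pose_dim):
        super().__init__()
        self.fc = nn.Linear(id_dim + pose_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, id_code, pose_code):
        h = self.fc(torch.cat([id_code, pose_code], dim=1))
        return self.net(h.view(-1, 128, 8, 8))

id_enc, pose_enc, dec = Encoder(128), Encoder(32), Decoder(128, 32)

# Identity is shared across a video: encoding several frames and averaging
# pushes the identity code toward pose-invariant, video-specific content,
# while the per-frame pose encoder is left to carry what varies.
frames = torch.rand(4, 3, 64, 64)   # frames sampled from one video (dummy data)
target = torch.rand(1, 3, 64, 64)   # another frame of the same video

id_code = id_enc(frames).mean(dim=0, keepdim=True)  # video-level identity
pose_code = pose_enc(target)                        # frame-level pose
recon = dec(id_code, pose_code)

# Self-supervised reconstruction loss; no landmark or boundary labels needed.
loss = nn.functional.l1_loss(recon, target)
loss.backward()
```

At reenactment time the same recombination transfers motion across identities: pair the identity code averaged from one person's frames with the pose code of a driving frame from another person.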

Published

2020-04-03

How to Cite

Zeng, X., Pan, Y., Wang, M., Zhang, J., & Liu, Y. (2020). Realistic Face Reenactment via Self-Supervised Disentangling of Identity and Pose. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 12757-12764. https://doi.org/10.1609/aaai.v34i07.6970

Section

AAAI Technical Track: Vision