Parallel and High-Fidelity Text-to-Lip Generation

Authors

  • Jinglin Liu, Zhejiang University
  • Zhiying Zhu, Zhejiang University
  • Yi Ren, Zhejiang University
  • Wencan Huang, Zhejiang University
  • Baoxing Huai, Huawei Cloud
  • Nicholas Yuan, Huawei Cloud
  • Zhou Zhao, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v36i2.20066

Keywords:

Computer Vision (CV)

Abstract

As a key component of talking face generation, lip movement generation determines the naturalness and coherence of the generated talking face video. Prior literature mainly focuses on speech-to-lip generation, while text-to-lip (T2L) generation has received little attention. T2L is a challenging task, and existing end-to-end works depend on an attention mechanism and autoregressive (AR) decoding. However, AR decoding generates each lip frame conditioned on previously generated frames, which inherently limits inference speed and also degrades the quality of the generated lip frames due to error propagation. This motivates research on parallel T2L generation. In this work, we propose a parallel decoding model for fast and high-fidelity text-to-lip generation (ParaLip). Specifically, we predict the duration of the encoded linguistic features and model the target lip frames conditioned on the encoded linguistic features and their durations in a non-autoregressive manner. Furthermore, we incorporate a structural similarity index (SSIM) loss and adversarial learning to improve the perceptual quality of the generated lip frames and alleviate the blurry-prediction problem. Extensive experiments conducted on the GRID and TCD-TIMIT datasets demonstrate the superiority of the proposed method.
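
ParaLip's parallel decoding rests on expanding each encoded linguistic feature by its predicted duration, so that the whole lip-frame sequence can be generated in a single non-autoregressive pass. Below is a minimal PyTorch sketch of that length-regulation step; the module name LengthRegulator and the tensor shapes are illustrative assumptions, not the authors' released implementation.

# Minimal PyTorch sketch of duration-based length regulation for parallel
# text-to-lip decoding. Names and shapes are illustrative assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn

class LengthRegulator(nn.Module):
    """Expand each linguistic feature by its predicted (integer) duration."""

    def forward(self, hidden, durations):
        # hidden:    [T_text, H]  encoded linguistic features for one sample
        # durations: [T_text]     predicted lip-frame counts per feature
        # repeat_interleave copies the i-th feature durations[i] times, so the
        # full target frame sequence is laid out at once and can be decoded
        # into lip frames in parallel rather than frame by frame.
        return torch.repeat_interleave(hidden, durations, dim=0)  # [T_frames, H]

if __name__ == "__main__":
    torch.manual_seed(0)
    hidden = torch.randn(5, 16)                # 5 phoneme-level features
    durations = torch.tensor([2, 3, 1, 4, 2])  # predicted frame counts
    print(LengthRegulator()(hidden, durations).shape)  # torch.Size([12, 16])

At inference time, the predicted durations fix the output length up front, which is what removes the frame-by-frame dependency of AR decoding.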

Published

2022-06-28

How to Cite

Liu, J., Zhu, Z., Ren, Y., Huang, W., Huai, B., Yuan, N., & Zhao, Z. (2022). Parallel and High-Fidelity Text-to-Lip Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 1738-1746. https://doi.org/10.1609/aaai.v36i2.20066

Issue

Vol. 36 No. 2 (2022)

Section

AAAI Technical Track on Computer Vision II