SyncTalkFace: Talking Face Generation with Precise Lip-Syncing via Audio-Lip Memory
Keywords: Computer Vision (CV)
Abstract
The challenge of talking face generation from speech lies in aligning information from two different modalities, audio and video, such that the mouth region corresponds to the input audio. Previous methods either exploit audio-visual representation learning or leverage intermediate structural information such as landmarks and 3D models. However, they struggle to synthesize fine details of the lips that vary at the phoneme level because they do not provide sufficient visual information about the lips at the video synthesis step. To overcome this limitation, our work proposes Audio-Lip Memory, which brings in visual information of the mouth region corresponding to the input audio and enforces fine-grained audio-visual coherence. It stores lip motion features from sequential ground-truth images in the value memory and aligns them with corresponding audio features so that they can be retrieved using audio input at inference time. Therefore, using the retrieved lip motion features as visual hints, the model can easily correlate audio with visual dynamics in the synthesis step. By analyzing the memory, we demonstrate that unique lip features are stored in each memory slot at the phoneme level, capturing subtle lip motion based on memory addressing. In addition, we introduce a visual-visual synchronization loss that, used alongside the audio-visual synchronization loss in our model, further enhances lip-syncing performance. Extensive experiments verify that our method generates high-quality video with mouth shapes that best align with the input audio, outperforming previous state-of-the-art methods.
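The retrieval step the abstract describes, addressing a value memory of lip features with an audio query, can be sketched as soft key-value memory addressing. The slot count, feature dimension, and cosine-similarity addressing below are illustrative assumptions for exposition, not the paper's actual architecture or hyperparameters:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - x.max())
    return e / e.sum()

def retrieve_lip_features(audio_feat, key_memory, value_memory):
    """Soft key-value memory addressing (illustrative sketch).

    audio_feat:   (d,)   audio feature used as the query
    key_memory:   (S, d) keys aligned with audio features during training
    value_memory: (S, d) stored lip motion features (one per slot)
    Returns a weighted sum over value slots: the retrieved lip feature.
    """
    # cosine similarity between the audio query and each key slot
    sims = key_memory @ audio_feat / (
        np.linalg.norm(key_memory, axis=1) * np.linalg.norm(audio_feat) + 1e-8
    )
    weights = softmax(sims)          # addressing weights over memory slots
    return weights @ value_memory    # retrieved lip motion feature, shape (d,)

# toy example: 8 memory slots, 16-dim features (hypothetical sizes)
rng = np.random.default_rng(0)
keys = rng.standard_normal((8, 16))
values = rng.standard_normal((8, 16))
query = rng.standard_normal(16)
lip_feat = retrieve_lip_features(query, keys, values)
```

At inference, only the audio query is needed: the addressing weights select which stored lip features to blend, which is how audio alone can supply visual hints to the synthesis step.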
How to Cite
Park, S. J., Kim, M., Hong, J., Choi, J., & Ro, Y. M. (2022). SyncTalkFace: Talking Face Generation with Precise Lip-Syncing via Audio-Lip Memory. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 2062-2070. https://doi.org/10.1609/aaai.v36i2.20102
AAAI Technical Track on Computer Vision II