What Does Your Face Sound Like? 3D Face Shape towards Voice
DOI:
https://doi.org/10.1609/aaai.v37i11.26628
Keywords:
SNLP: Speech and Multimodality, SNLP: Generation
Abstract
Face-based speech synthesis provides a practical way to generate voices from human faces. However, using 2D face images directly leads to problems of uninterpretability and entanglement. In this paper, to address these issues, we introduce the 3D face shape, which (1) has an anatomical relationship with voice characteristics, taking part in the "bone conduction" of human timbre production, and (2) is naturally independent of irrelevant factors by excluding the blending process. We devise a three-stage framework to generate speech from 3D face shapes. To fully account for both the anatomical and the acquired aspects of timbre production, our framework incorporates three additional relevant attributes: face texture, facial features, and demographics. Experiments and subjective tests demonstrate that our method can generate utterances that match faces well, with good audio quality and voice diversity. We also explore and visualize how the voice changes with the face. Case studies show that our method upgrades face-to-voice inference to personalized, custom-made voice creation, revealing a promising prospect for virtual-human and dubbing applications.
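The paper's code is not provided on this page, so the following is only a minimal, hypothetical sketch of how a three-stage face-to-voice pipeline of this kind could be wired up: a speaker encoder that fuses a 3D face shape with auxiliary attributes (texture, facial features, demographics) into a speaker embedding, an acoustic model that conditions mel-spectrogram prediction on that embedding, and a vocoder stage (omitted). All module names, dimensions (e.g., a 5023-vertex mesh, a 16-dimensional attribute code), and the use of PyTorch are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical three-stage face-to-voice sketch (NOT the authors' method or code).
# Stage 1: 3D face shape + attributes -> speaker embedding.
# Stage 2: speaker embedding + text features -> mel-spectrogram.
# Stage 3: a vocoder would convert the mel-spectrogram to a waveform (omitted here).
import torch
import torch.nn as nn


class SpeakerEncoder(nn.Module):
    """Maps a flattened 3D face mesh and auxiliary attribute codes to a speaker embedding."""

    def __init__(self, n_vertices=5023, attr_dim=16, emb_dim=256):
        super().__init__()
        self.shape_net = nn.Sequential(
            nn.Linear(n_vertices * 3, 512), nn.ReLU(), nn.Linear(512, emb_dim)
        )
        self.attr_net = nn.Sequential(
            nn.Linear(attr_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim)
        )
        self.fuse = nn.Linear(2 * emb_dim, emb_dim)

    def forward(self, shape, attrs):
        # shape: (B, n_vertices*3) flattened vertex coordinates
        # attrs: (B, attr_dim) texture / facial-feature / demographic codes
        s = self.shape_net(shape)
        a = self.attr_net(attrs)
        return self.fuse(torch.cat([s, a], dim=-1))


class AcousticModel(nn.Module):
    """Predicts a mel-spectrogram sequence conditioned on the speaker embedding."""

    def __init__(self, text_dim=128, emb_dim=256, n_mels=80):
        super().__init__()
        self.proj = nn.Linear(text_dim + emb_dim, 256)
        self.gru = nn.GRU(256, 256, batch_first=True)
        self.out = nn.Linear(256, n_mels)

    def forward(self, text_feats, spk_emb):
        # text_feats: (B, T, text_dim); the speaker embedding is broadcast over time
        cond = spk_emb.unsqueeze(1).expand(-1, text_feats.size(1), -1)
        h, _ = self.gru(self.proj(torch.cat([text_feats, cond], dim=-1)))
        return self.out(h)  # (B, T, n_mels), to be vocoded in stage 3


# Example forward pass with random tensors standing in for real inputs.
enc, am = SpeakerEncoder(), AcousticModel()
spk = enc(torch.randn(2, 5023 * 3), torch.randn(2, 16))
mel = am(torch.randn(2, 50, 128), spk)  # (2, 50, 80)
```

In such a design the speaker embedding is the only channel through which face information reaches the synthesizer, which is one plausible way to keep the voice conditioning disentangled from the linguistic content.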
Published
2023-06-26
How to Cite
Yang, Z., Wu, Z., Shan, Y., & Jia, J. (2023). What Does Your Face Sound Like? 3D Face Shape towards Voice. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13905-13913. https://doi.org/10.1609/aaai.v37i11.26628
Issue
Vol. 37 No. 11 (2023)
Section
AAAI Technical Track on Speech & Natural Language Processing