UniSyn: An End-to-End Unified Model for Text-to-Speech and Singing Voice Synthesis
DOI:
https://doi.org/10.1609/aaai.v37i11.26530
Keywords:
SNLP: Speech and Multimodality, SNLP: Generation
Abstract
Text-to-speech (TTS) and singing voice synthesis (SVS) aim to generate high-quality speaking and singing voices from textual input and music scores, respectively. Unifying TTS and SVS into a single system is crucial for applications that require both. Existing methods suffer from notable limitations: they rely either on both singing and speaking data from the same person or on cascading multiple task-specific models. To address these problems, this paper proposes UniSyn, a simple and elegant framework for TTS and SVS. It is an end-to-end unified model that can make a voice speak and sing given only singing or speaking data from that person. Specifically, UniSyn introduces a multi-conditional variational autoencoder (MC-VAE), which constructs two independent latent sub-spaces with speaker- and style-related (i.e., speaking or singing) conditions for flexible control. Moreover, supervised guided-VAE training and timbre perturbation with a Wasserstein distance constraint are leveraged to further disentangle speaker timbre from style. Experiments conducted on two speakers and two singers demonstrate that UniSyn can generate natural speaking and singing voices without corresponding training data. The proposed approach outperforms state-of-the-art end-to-end voice generation work, which proves the effectiveness and advantages of UniSyn.
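As a rough illustration of the MC-VAE idea sketched in the abstract, the following PyTorch snippet builds two independent latent sub-spaces, one conditioned on speaker identity and one on style (speaking vs. singing), and includes a closed-form squared 2-Wasserstein distance between diagonal Gaussians of the kind that could implement the constraint mentioned above. All module names, dimensions, and the encoder/decoder structure here are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

def w2_gaussian(mu1, logvar1, mu2, logvar2):
    # Closed-form squared 2-Wasserstein distance between two
    # diagonal Gaussians: ||mu1 - mu2||^2 + ||std1 - std2||^2.
    std1, std2 = torch.exp(0.5 * logvar1), torch.exp(0.5 * logvar2)
    return ((mu1 - mu2).pow(2) + (std1 - std2).pow(2)).sum(-1).mean()

class MCVAESketch(nn.Module):
    """Hypothetical sketch of a multi-conditional VAE: two independent
    latent sub-spaces conditioned on speaker and style, respectively."""

    def __init__(self, feat_dim=80, latent_dim=16, num_speakers=4, num_styles=2):
        super().__init__()
        self.spk_emb = nn.Embedding(num_speakers, latent_dim)
        self.sty_emb = nn.Embedding(num_styles, latent_dim)
        # Separate posterior encoders for the two sub-spaces; each
        # outputs the mean and log-variance of a diagonal Gaussian.
        self.enc_spk = nn.Linear(feat_dim + latent_dim, 2 * latent_dim)
        self.enc_sty = nn.Linear(feat_dim + latent_dim, 2 * latent_dim)
        self.dec = nn.Linear(2 * latent_dim, feat_dim)

    @staticmethod
    def reparameterize(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # KL divergence to a standard-normal prior.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, kl

    def forward(self, x, spk_id, sty_id):
        c_spk = self.spk_emb(spk_id)  # speaker-related condition
        c_sty = self.sty_emb(sty_id)  # style condition (speak / sing)
        z_spk, kl_spk = self.reparameterize(self.enc_spk(torch.cat([x, c_spk], -1)))
        z_sty, kl_sty = self.reparameterize(self.enc_sty(torch.cat([x, c_sty], -1)))
        recon = self.dec(torch.cat([z_spk, z_sty], -1))
        return recon, kl_spk + kl_sty

A minimal usage example under the same assumptions: reconstruct a batch of 80-dimensional acoustic frames and combine the reconstruction and KL terms into one loss.

model = MCVAESketch()
x = torch.randn(8, 80)                 # dummy acoustic features
spk = torch.randint(0, 4, (8,))        # speaker IDs
sty = torch.randint(0, 2, (8,))        # 0 = speak, 1 = sing
recon, kl = model(x, spk, sty)
loss = F.mse_loss(recon, x) + 1e-2 * kl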
Published
2023-06-26
How to Cite
Lei, Y., Yang, S., Wang, X., Xie, Q., Yao, J., Xie, L., & Su, D. (2023). UniSyn: An End-to-End Unified Model for Text-to-Speech and Singing Voice Synthesis. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13025-13033. https://doi.org/10.1609/aaai.v37i11.26530
Section
AAAI Technical Track on Speech & Natural Language Processing