CSSinger: End-to-End Chunkwise Streaming Singing Voice Synthesis System Based on Conditional Variational Autoencoder

Authors

  • Jianwei Cui (University of Science and Technology of China; Tencent AI Lab)
  • Yu Gu (Tencent AI Lab)
  • Shihao Chen (University of Science and Technology of China)
  • Jie Zhang (University of Science and Technology of China)
  • Liping Chen (University of Science and Technology of China)
  • Lirong Dai (University of Science and Technology of China)

DOI:

https://doi.org/10.1609/aaai.v39i22.34541

Abstract

Singing Voice Synthesis (SVS) aims to generate singing voices with high fidelity and expressiveness. Conventional SVS systems usually employ an acoustic model to transform a music score into acoustic features, followed by a vocoder that reconstructs the singing voice. End-to-end modeling was recently shown to be effective in both SVS and Text-to-Speech (TTS). In this work, we therefore present a fully end-to-end SVS method with chunkwise streaming inference to address the latency issue in practical use. To our knowledge, this is the first attempt to fully implement end-to-end streaming audio synthesis from the latent representations of a VAE, and we introduce specific improvements to enhance the performance of streaming SVS based on these latents. Experimental results demonstrate that the proposed method synthesizes audio with high expressiveness and pitch accuracy in both streaming SVS and TTS tasks.
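The chunkwise streaming idea described above can be illustrated with a toy sketch: decode the VAE latent sequence chunk by chunk, crossfading a short overlap between consecutive chunks so waveform audio can be emitted before the whole utterance is decoded. Everything here is an illustrative assumption, not the paper's actual scheme — `decode` is a stand-in for the real VAE decoder/vocoder, and the chunk size, overlap, and linear crossfade are arbitrary choices.

```python
import numpy as np

HOP = 256  # assumed upsampling factor: one latent frame -> HOP waveform samples


def decode(latent_chunk: np.ndarray) -> np.ndarray:
    """Placeholder for the VAE decoder/vocoder: upsample each latent frame
    to HOP waveform samples (a real decoder would be a neural network)."""
    return np.repeat(latent_chunk, HOP).astype(np.float32)


def stream_synthesize(latents: np.ndarray, chunk: int = 20, overlap: int = 4):
    """Decode latent frames chunkwise, yielding waveform pieces incrementally.

    Each chunk is decoded with `overlap` extra frames of lookahead; the
    overlapping region is linearly crossfaded with the next chunk to avoid
    boundary discontinuities.
    """
    fade = np.linspace(0.0, 1.0, overlap * HOP, dtype=np.float32)
    prev_tail = None
    start = 0
    while start < len(latents):
        end = min(start + chunk + overlap, len(latents))
        wav = decode(latents[start:end])
        if prev_tail is not None:
            # crossfade the withheld tail of the previous chunk into this head
            head = wav[: overlap * HOP]
            wav = np.concatenate(
                [prev_tail * (1.0 - fade) + head * fade, wav[overlap * HOP:]]
            )
        if end < len(latents):
            # withhold the overlap region; it is finalized by the next chunk
            prev_tail = wav[-overlap * HOP:]
            yield wav[:-overlap * HOP]
        else:
            yield wav  # last chunk: emit everything
            return
        start += chunk
```

With the trivial placeholder decoder, concatenating the streamed pieces reproduces the full non-streaming output exactly; with a real decoder, the overlap limits the mismatch at chunk boundaries at the cost of `overlap * HOP` samples of extra latency per chunk.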

Published

2025-04-11

How to Cite

Cui, J., Gu, Y., Chen, S., Zhang, J., Chen, L., & Dai, L. (2025). CSSinger: End-to-End Chunkwise Streaming Singing Voice Synthesis System Based on Conditional Variational Autoencoder. Proceedings of the AAAI Conference on Artificial Intelligence, 39(22), 23704-23714. https://doi.org/10.1609/aaai.v39i22.34541

Section

AAAI Technical Track on Natural Language Processing I