A Vector Quantized Approach for Text to Speech Synthesis on Real-World Spontaneous Speech

Authors

  • Li-Wei Chen, Carnegie Mellon University
  • Shinji Watanabe, Carnegie Mellon University
  • Alexander Rudnicky, Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v37i11.26488

Keywords:

SNLP: Generation, SNLP: Speech and Multimodality, ML: Deep Neural Architectures, SNLP: Applications, ML: Applications

Abstract

Recent Text-to-Speech (TTS) systems trained on read or acted corpora have achieved near human-level naturalness. The diversity of human speech, however, often goes beyond the coverage of these corpora. We believe the ability to handle such diversity is crucial for AI systems to achieve human-level communication. Our work explores the use of more abundant real-world data for building speech synthesizers. We train TTS systems on real-world speech from YouTube and podcasts. We observe a mismatch between training and inference alignments in mel-spectrogram-based autoregressive models that leads to unintelligible synthesis, and demonstrate that learned discrete codes within multiple code groups effectively resolve this issue. We introduce our MQTTS system, whose architecture is designed for multiple-code generation and monotonic alignment, along with the use of a clean silence prompt to improve synthesis quality. We conduct ablation analyses to assess the efficacy of our methods. We show that MQTTS outperforms existing TTS systems in several objective and subjective measures.
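The abstract's key technical move, replacing the continuous mel-spectrogram target with learned discrete codes drawn from multiple code groups, can be sketched in a few lines of PyTorch. The sketch below is a generic grouped vector quantizer with a straight-through gradient estimator; it is an illustration under our own assumptions (the class name, group count, and codebook size are ours), not the authors' MQTTS implementation.

```python
# A minimal sketch of multi-group vector quantization (hypothetical, not the
# authors' code): each frame's feature vector is split into n_groups
# sub-vectors, and each sub-vector is quantized against its own codebook.
import torch
import torch.nn as nn


class GroupedVectorQuantizer(nn.Module):
    """Quantize each of n_groups sub-vectors with its own learned codebook."""

    def __init__(self, dim: int, n_groups: int, codebook_size: int):
        super().__init__()
        assert dim % n_groups == 0, "feature dim must split evenly into groups"
        self.n_groups = n_groups
        self.group_dim = dim // n_groups
        # One embedding table (codebook) per code group.
        self.codebooks = nn.ModuleList(
            nn.Embedding(codebook_size, self.group_dim) for _ in range(n_groups)
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, time, dim) continuous encoder features.
        b, t, _ = x.shape
        chunks = x.view(b, t, self.n_groups, self.group_dim)
        quantized, indices = [], []
        for g, codebook in enumerate(self.codebooks):
            sub = chunks[:, :, g, :]                         # (b, t, group_dim)
            # Nearest code by squared Euclidean distance.
            dist = (sub.unsqueeze(-2) - codebook.weight).pow(2).sum(-1)
            idx = dist.argmin(dim=-1)                        # (b, t)
            q = codebook(idx)                                # (b, t, group_dim)
            # Straight-through estimator: copy gradients past the argmin.
            quantized.append(sub + (q - sub).detach())
            indices.append(idx)
        z_q = torch.cat(quantized, dim=-1)    # (b, t, dim) quantized features
        codes = torch.stack(indices, dim=-1)  # (b, t, n_groups) discrete codes
        return z_q, codes


# Toy usage: 4 code groups over 128-dim features, 160 codes per group.
vq = GroupedVectorQuantizer(dim=128, n_groups=4, codebook_size=160)
z_q, codes = vq(torch.randn(2, 50, 128))
print(z_q.shape, codes.shape)  # (2, 50, 128) and (2, 50, 4)
```

In a setup like this, an autoregressive model would predict the (batch, time, n_groups) code indices from text and a decoder would map the quantized features back to waveform, so that training and inference targets live in the same discrete space rather than in continuous mel-spectrograms.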

Published

2023-06-26

How to Cite

Chen, L.-W., Watanabe, S., & Rudnicky, A. (2023). A Vector Quantized Approach for Text to Speech Synthesis on Real-World Spontaneous Speech. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 12644-12652. https://doi.org/10.1609/aaai.v37i11.26488

Section

AAAI Technical Track on Speech & Natural Language Processing