Multi-SpectroGAN: High-Diversity and High-Fidelity Spectrogram Generation with Adversarial Style Combination for Speech Synthesis

Authors

  • Sang-Hoon Lee, Korea University
  • Hyun-Wook Yoon, Korea University
  • Hyeong-Rae Noh, Korea University
  • Ji-Hoon Kim, Korea University
  • Seong-Whan Lee, Korea University

Keywords:

Speech Synthesis

Abstract

While generative adversarial network (GAN)-based neural text-to-speech (TTS) systems have shown significant improvement in neural speech synthesis, no existing TTS system learns to synthesize speech from text sequences with only adversarial feedback. Because adversarial feedback alone is not sufficient to train the generator, current models still require a reconstruction loss that directly compares the generated mel-spectrogram with the ground truth. In this paper, we present Multi-SpectroGAN (MSG), which can train a multi-speaker model with only adversarial feedback by conditioning a conditional discriminator on a self-supervised hidden representation of the generator. This leads to better guidance for generator training. Moreover, we propose adversarial style combination (ASC) for better generalization to unseen speaking styles and transcripts, which learns latent representations of style embeddings combined from multiple mel-spectrograms. Trained with ASC and feature matching, the MSG synthesizes high-diversity mel-spectrograms by controlling and mixing individual speaking styles (e.g., duration, pitch, and energy). The results show that the MSG synthesizes high-fidelity mel-spectrograms whose naturalness MOS score is nearly the same as that of the ground-truth mel-spectrogram.
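The core of adversarial style combination is mixing the style embeddings of multiple reference mel-spectrograms into a single conditioning vector; the discriminator then provides adversarial feedback on the speech generated from the mixed style, for which no ground-truth target exists. A minimal sketch of this mixing step is shown below; the function and variable names (`combine_styles`, `style_a`, `style_b`, the 256-dimensional embedding size, and the uniform sampling of the mixing ratio) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def combine_styles(style_a, style_b, alpha):
    """Hypothetical ASC mixing step: linearly interpolate two speaker-style
    embeddings with coefficient alpha. The mixed embedding would condition
    the generator, whose output is judged only by adversarial feedback,
    since no ground-truth mel-spectrogram exists for a combined style."""
    return alpha * style_a + (1.0 - alpha) * style_b

# Two hypothetical style embeddings extracted from reference mel-spectrograms.
rng = np.random.default_rng(0)
style_a = rng.standard_normal(256)
style_b = rng.standard_normal(256)

# Sample a mixing ratio per training step (assumed uniform here).
alpha = rng.uniform()
mixed = combine_styles(style_a, style_b, alpha)
```

At inference time the same interpolation could, under these assumptions, control how strongly each reference speaker's duration, pitch, and energy characteristics appear in the synthesized speech.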

Published

2021-05-18

How to Cite

Lee, S.-H., Yoon, H.-W., Noh, H.-R., Kim, J.-H., & Lee, S.-W. (2021). Multi-SpectroGAN: High-Diversity and High-Fidelity Spectrogram Generation with Adversarial Style Combination for Speech Synthesis. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 13198-13206. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17559

Section

AAAI Technical Track on Speech and Natural Language Processing I