Avocodo: Generative Adversarial Network for Artifact-Free Vocoder


  • Taejun Bak AI Center, NCSOFT, Seongnam, Korea
  • Junmo Lee SK Telecom, Seoul, Korea
  • Hanbin Bae Samsung Research, Seoul, Korea
  • Jinhyeok Yang Supertone Inc., Seoul, Korea
  • Jae-Sung Bae Samsung Research, Seoul, Korea
  • Young-Sun Joo AI Center, NCSOFT, Seongnam, Korea




SNLP: Speech and Multimodality, SNLP: Generation


Neural vocoders based on the generative adversarial network (GAN) have been widely used because they generate high-quality speech waveforms with fast inference speed and lightweight networks. Since the perceptually important speech components are primarily concentrated in the low-frequency bands, most GAN-based vocoders perform multi-scale analysis that evaluates downsampled speech waveforms. This multi-scale analysis helps the generator improve speech intelligibility. However, in preliminary experiments, we discovered that the multi-scale analysis that focuses on the low-frequency bands causes unintended artifacts, e.g., aliasing and imaging artifacts, which degrade the synthesized speech waveform quality. Therefore, in this paper, we investigate the relationship between these artifacts and GAN-based vocoders and propose a GAN-based vocoder, called Avocodo, that allows the synthesis of high-fidelity speech with reduced artifacts. We introduce two kinds of discriminators to evaluate speech waveforms from different perspectives: a collaborative multi-band discriminator and a sub-band discriminator. We also utilize a pseudo quadrature mirror filter bank (PQMF) to obtain downsampled multi-band speech waveforms while avoiding aliasing. According to experimental results, Avocodo outperforms baseline GAN-based vocoders, both objectively and subjectively, while reproducing speech with fewer artifacts.
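To see why downsampling without proper band-limiting (the role the PQMF plays in the abstract above) causes aliasing, the sketch below naively decimates two pure tones by 2. The sampling rate, tone frequencies, and zero-crossing frequency estimator are illustrative assumptions for this demo, not components of Avocodo itself: a tone above the post-decimation Nyquist folds back into the baseband, while a tone below it survives unchanged.

```python
import math

FS = 8000   # illustrative sampling rate (Hz), not from the paper
N = 8000    # one second of samples

def tone(freq, fs=FS, n=N, phase=0.1):
    """Pure sine tone; the small phase offset avoids exact-zero samples."""
    return [math.sin(2 * math.pi * freq * t / fs + phase) for t in range(n)]

def est_freq(sig, fs):
    """Crude frequency estimate: positive-going zero crossings per second."""
    crossings = sum(1 for a, b in zip(sig, sig[1:]) if a < 0 <= b)
    return crossings * fs / len(sig)

high = tone(3000)   # above the post-decimation Nyquist (2000 Hz)
low = tone(1500)    # below it

# Naive decimation by 2: keep every other sample, no anti-aliasing filter.
fs2 = FS // 2
high_dec = high[::2]
low_dec = low[::2]

print(est_freq(high, FS), est_freq(high_dec, fs2))  # ~3000 Hz folds to ~1000 Hz
print(est_freq(low, FS), est_freq(low_dec, fs2))    # ~1500 Hz survives intact
```

A PQMF analysis bank avoids this by band-limiting each sub-band before decimation, so no component crosses the new Nyquist boundary.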




How to Cite

Bak, T., Lee, J., Bae, H., Yang, J., Bae, J.-S., & Joo, Y.-S. (2023). Avocodo: Generative Adversarial Network for Artifact-Free Vocoder. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 12562-12570. https://doi.org/10.1609/aaai.v37i11.26479



AAAI Technical Track on Speech & Natural Language Processing