Enhancing Zero-Shot Multi-Speaker TTS with Negated Speaker Representations


  • Yejin Jeon POSTECH
  • Yunsu Kim aiXplain, Inc.
  • Gary Geunbae Lee POSTECH




NLP: Speech


Zero-shot multi-speaker TTS aims to synthesize speech in the voice of a chosen target speaker without any fine-tuning. Prevailing methods, however, encounter limitations when adapting to new speakers in out-of-domain settings, primarily due to inadequate speaker disentanglement and content leakage. To overcome these constraints, we propose an innovative negation feature learning paradigm that models decoupled speaker attributes as deviations from the complete audio representation by utilizing the subtraction operation. By eliminating superfluous content information from the speaker representation, our negation scheme not only mitigates content leakage, thereby enhancing synthesis robustness, but also improves speaker fidelity. In addition, to facilitate the learning of diverse speaker attributes, we leverage multi-stream Transformers, which retain multiple hypotheses and instigate a training paradigm akin to ensemble learning. To unify these hypotheses and realize the final speaker representation, we employ attention pooling. Finally, since the target text utterance must be generated in the desired voice, we adopt adaptive layer normalization to effectively fuse the previously generated speaker representation with the target text representations, as opposed to mere concatenation of the text and audio modalities. Extensive experiments and validations substantiate the efficacy of our proposed approach in preserving and harnessing speaker-specific attributes vis-à-vis alternative baseline models.
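The three operations described in the abstract — subtracting content information from the complete audio representation, pooling multi-stream hypotheses with attention, and conditioning the text representation via adaptive layer normalization — can be sketched roughly as follows. This is a minimal illustration with invented shapes, names, and random stand-ins for learned modules; it is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

D = 64   # feature dimension (assumed for illustration)
S = 4    # number of Transformer streams / hypotheses (assumed)
T = 50   # number of time frames (assumed)

# Stand-ins for encoder outputs: the complete audio representation
# and the content representation of the same utterance.
full_audio = rng.standard_normal((T, D))
content = rng.standard_normal((T, D))

# Negation: model speaker attributes as the deviation of the full
# audio representation from its content, via subtraction.
speaker_residual = full_audio - content              # (T, D)

# Multi-stream hypotheses: each stream yields its own pooled speaker
# vector (random projections stand in for the per-stream Transformers).
stream_vecs = np.stack([
    (speaker_residual @ rng.standard_normal((D, D))).mean(axis=0)
    for _ in range(S)
])                                                   # (S, D)

# Attention pooling: a learned query scores each stream; the final
# speaker embedding is the softmax-weighted sum of the hypotheses.
query = rng.standard_normal(D)
scores = stream_vecs @ query / np.sqrt(D)            # (S,)
weights = np.exp(scores - scores.max())
weights /= weights.sum()
speaker_emb = weights @ stream_vecs                  # (D,)

# Adaptive layer normalization: instead of concatenating modalities,
# predict per-feature scale and shift from the speaker embedding and
# apply them to the normalized text representation.
W_gamma, W_beta = rng.standard_normal((2, D, D))     # hypothetical projections
gamma = speaker_emb @ W_gamma                        # (D,)
beta = speaker_emb @ W_beta                          # (D,)

text_hidden = rng.standard_normal((T, D))            # stand-in text encoding
mu = text_hidden.mean(axis=-1, keepdims=True)
sigma = text_hidden.std(axis=-1, keepdims=True)
conditioned = gamma * (text_hidden - mu) / (sigma + 1e-5) + beta
```

The subtraction step is what removes shared content information from the speaker pathway, while the attention weights let the model favor whichever stream's hypothesis best captures the speaker's attributes.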



How to Cite

Jeon, Y., Kim, Y., & Lee, G. G. (2024). Enhancing Zero-Shot Multi-Speaker TTS with Negated Speaker Representations. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 18336-18344. https://doi.org/10.1609/aaai.v38i16.29793



AAAI Technical Track on Natural Language Processing I