SeDepTTS: Enhancing the Naturalness via Semantic Dependency and Local Convolution for Text-to-Speech Synthesis
DOI:
https://doi.org/10.1609/aaai.v37i11.26523
Keywords:
SNLP: Other Foundations of Speech & Natural Language Processing, SNLP: Speech and Multimodality
Abstract
Self-attention-based networks have achieved impressive performance in parallel training and global context modeling. However, they are weak at capturing local dependencies, especially for data with strong local correlations such as utterances. Therefore, we mine linguistic information from the original text based on semantic dependency, and the semantic relationships between nodes are treated as prior knowledge to revise the distribution of self-attention. On the other hand, given the strong correlation between input characters, we introduce a one-dimensional (1-D) convolutional neural network (CNN) to produce the query (Q) and value (V) in the self-attention mechanism for a better fusion of local contextual information. We then migrate this variant of self-attention to speech synthesis and propose a non-autoregressive (NAR) neural text-to-speech (TTS) model: SeDepTTS. Experimental results show that our model yields good performance in speech synthesis. Specifically, the proposed method yields significant improvements in the handling of pause, stress, and intonation in speech.
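The abstract describes two modifications to standard self-attention: a semantic-dependency prior that revises the attention distribution, and 1-D convolutions that produce Q and V so each position fuses a local window of context. The sketch below illustrates one plausible reading of that design; it is not the authors' released implementation, and the kernel size, the pointwise projection kept for K, and the additive injection of the dependency prior into the attention logits are all assumptions made for illustration.

```python
# Minimal PyTorch sketch of the attention variant sketched in the abstract.
# Assumptions (not from the paper): kernel_size=3, K uses a linear projection,
# and the dependency prior is added to the logits before the softmax.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticDepSelfAttention(nn.Module):
    def __init__(self, d_model: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # 1-D convolutions produce Q and V so each position mixes in its
        # neighboring characters; K keeps a pointwise (linear) projection.
        self.q_conv = nn.Conv1d(d_model, d_model, kernel_size, padding=pad)
        self.v_conv = nn.Conv1d(d_model, d_model, kernel_size, padding=pad)
        self.k_proj = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor, dep_prior: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        # dep_prior: (batch, seq_len, seq_len) bias derived from a semantic
        # dependency parse, e.g. larger values for related node pairs.
        xt = x.transpose(1, 2)               # (B, d_model, T) for Conv1d
        q = self.q_conv(xt).transpose(1, 2)  # (B, T, d_model)
        v = self.v_conv(xt).transpose(1, 2)
        k = self.k_proj(x)
        scores = torch.bmm(q, k.transpose(1, 2)) * self.scale
        # Revise the attention distribution with the dependency prior.
        attn = F.softmax(scores + dep_prior, dim=-1)
        return torch.bmm(attn, v)


if __name__ == "__main__":
    B, T, D = 2, 16, 64
    layer = SemanticDepSelfAttention(D)
    x = torch.randn(B, T, D)
    prior = torch.zeros(B, T, T)  # flat prior; a real one comes from a parser
    print(layer(x, prior).shape)  # torch.Size([2, 16, 64])
```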
Published
2023-06-26
How to Cite
Jiang, C., Gao, Y., Ng, W. W., Zhou, J., Zhong, J., & Zhen, H. (2023). SeDepTTS: Enhancing the Naturalness via Semantic Dependency and Local Convolution for Text-to-Speech Synthesis. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 12959-12967. https://doi.org/10.1609/aaai.v37i11.26523
Section
AAAI Technical Track on Speech & Natural Language Processing