ELLA-V: Stable Neural Codec Language Modeling with Alignment-Guided Sequence Reordering
DOI:
https://doi.org/10.1609/aaai.v39i24.34703
Abstract
The language model (LM) approach based on acoustic and linguistic prompts, such as VALL-E, has achieved remarkable progress in zero-shot audio generation. However, existing methods still have limitations: 1) repetitions, transpositions, and omissions in the synthesized speech due to limited alignment constraints between audio and phoneme tokens; 2) difficulty achieving fine-grained control over the synthesized speech with an autoregressive (AR) language model; 3) infinite silence generation caused by the nature of AR-based decoding, especially under the greedy strategy. To alleviate these issues, we propose ELLA-V, a simple but efficient LM-based zero-shot text-to-speech (TTS) framework that enables fine-grained control over synthesized audio at the phoneme level. The key to ELLA-V is interleaving sequences of acoustic and phoneme tokens, where phoneme tokens appear ahead of the corresponding acoustic tokens. Experimental findings show that our model outperforms baselines in accuracy and delivers more stable results under both greedy and sampling-based decoding strategies.
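The interleaving idea described above can be sketched in a few lines: given a phoneme sequence and an alignment mapping each phoneme to its acoustic tokens, each phoneme token is emitted ahead of its aligned acoustic tokens. This is a minimal illustrative sketch, not the paper's exact tokenization; the function name, token values, and alignment representation are assumptions for the example.

```python
# Hedged sketch of alignment-guided sequence reordering: each phoneme token
# precedes the acoustic tokens aligned to it, so the LM sees explicit
# phoneme-acoustic ordering constraints. All names/values are illustrative.

def interleave(phonemes, alignment):
    """phonemes: list of phoneme tokens.
    alignment: list of lists, where alignment[i] holds the acoustic
    (codec) tokens aligned to phonemes[i]."""
    seq = []
    for ph, acoustic in zip(phonemes, alignment):
        seq.append(ph)        # phoneme token appears first ...
        seq.extend(acoustic)  # ... followed by its aligned acoustic tokens
    return seq

# Toy example with made-up phoneme labels and codec token IDs:
phonemes = ["HH", "AH", "L"]
alignment = [[101, 102], [103], [104, 105, 106]]
print(interleave(phonemes, alignment))
# ['HH', 101, 102, 'AH', 103, 'L', 104, 105, 106]
```

Because every acoustic token is locally anchored to the phoneme that precedes it, the decoder has a positional cue for when each phoneme's audio should end, which is what mitigates repetitions, omissions, and runaway silence.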
Published
2025-04-11
How to Cite
Song, Y., Chen, Z., Wang, X., Ma, Z., & Chen, X. (2025). ELLA-V: Stable Neural Codec Language Modeling with Alignment-Guided Sequence Reordering. Proceedings of the AAAI Conference on Artificial Intelligence, 39(24), 25174–25182. https://doi.org/10.1609/aaai.v39i24.34703
Issue
Section
AAAI Technical Track on Natural Language Processing III