Improving End-to-End Speech Translation by Leveraging Auxiliary Speech and Text Data
DOI:
https://doi.org/10.1609/aaai.v37i11.26637
Keywords:
SNLP: Speech and Multimodality, SNLP: Machine Translation & Multilinguality
Abstract
We present a method for introducing a text encoder into pre-trained end-to-end speech translation systems. It enhances the model's ability to adapt one modality (i.e., source-language speech) to another (i.e., source-language text). As a result, the speech translation model can learn from both unlabeled and labeled data, which is especially useful when source-language text data is abundant. In addition, we present a denoising method for building a robust text encoder that can handle both clean and noisy text. Our system sets a new state of the art on the MuST-C En-De, En-Fr, and LibriSpeech En-Fr tasks.
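The denoising idea in the abstract amounts to training the text encoder on deliberately corrupted input so it stays robust to noisy text. As an illustrative sketch only (the corruption function, its name, and its masking/deletion probabilities are assumptions for exposition, not the paper's exact noising scheme), such corruption could be generated like this:

```python
import random

def corrupt(tokens, mask_token="<mask>", p_mask=0.15, p_drop=0.05, seed=None):
    """Corrupt a token sequence for denoising-style training:
    randomly replace some tokens with a mask symbol and delete others.
    Hypothetical noise scheme; the paper's exact corruption is not shown here."""
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        r = rng.random()
        if r < p_drop:
            continue  # delete this token
        elif r < p_drop + p_mask:
            out.append(mask_token)  # replace with mask symbol
        else:
            out.append(tok)  # keep token unchanged
    return out

clean = "we present a method for speech translation".split()
noisy = corrupt(clean, seed=0)
# A denoising text encoder would then be trained to map `noisy`
# back toward representations of `clean`.
```

The same encoder can thus be fed either clean transcripts or noisy text (e.g., ASR output) at training time, which is one way to realize the "normal and noisy text" robustness the abstract describes.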
Published
2023-06-26
How to Cite
Zhang, Y., Xu, C., Hu, B., Zhang, C., Xiao, T., & Zhu, J. (2023). Improving End-to-End Speech Translation by Leveraging Auxiliary Speech and Text Data. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13984-13992. https://doi.org/10.1609/aaai.v37i11.26637
Section
AAAI Technical Track on Speech & Natural Language Processing