Improving End-to-End Speech Translation by Leveraging Auxiliary Speech and Text Data

Authors

  • Yuhao Zhang, Northeastern University, China
  • Chen Xu, Northeastern University, China
  • Bojie Hu, Tencent Minority-Mandarin Translation, China
  • Chunliang Zhang, Northeastern University, China; NiuTrans Research, Shenyang, China
  • Tong Xiao, Northeastern University, China; NiuTrans Research, Shenyang, China
  • Jingbo Zhu, Northeastern University, China; NiuTrans Research, Shenyang, China

DOI:

https://doi.org/10.1609/aaai.v37i11.26637

Keywords:

SNLP: Speech and Multimodality, SNLP: Machine Translation & Multilinguality

Abstract

We present a method for introducing a text encoder into pre-trained end-to-end speech translation systems. It improves the model's ability to adapt one modality (i.e., source-language speech) to another (i.e., source-language text). The speech translation model can thus learn from both unlabeled and labeled data, especially when source-language text data is abundant. In addition, we present a denoising method for building a robust text encoder that can handle both normal and noisy text. Our system sets new state-of-the-art results on the MuST-C En-De, En-Fr, and LibriSpeech En-Fr tasks.
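The abstract does not specify how noisy text is produced for the denoising objective; a common choice in denoising-style training is to corrupt the clean token sequence with random drops and adjacent swaps. The sketch below is a hypothetical illustration of that idea, not the paper's actual noising scheme; the function name and probabilities are assumptions.

```python
import random


def add_noise(tokens, p_drop=0.1, p_swap=0.1, seed=0):
    """Hypothetical noise injection for denoising-style training.

    Corrupts a clean token sequence by randomly dropping tokens
    (with probability p_drop) and swapping adjacent survivors
    (with probability p_swap). A seeded RNG keeps runs reproducible.
    """
    rng = random.Random(seed)
    # Keep each token with probability (1 - p_drop).
    out = [t for t in tokens if rng.random() >= p_drop]
    # Swap some adjacent pairs to simulate word-order noise.
    i = 0
    while i < len(out) - 1:
        if rng.random() < p_swap:
            out[i], out[i + 1] = out[i + 1], out[i]
            i += 2  # skip past the swapped pair
        else:
            i += 1
    return out
```

A robust text encoder would then be trained on both the clean sequence and such corrupted variants, so that its representations stay stable under input noise.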

Published

2023-06-26

How to Cite

Zhang, Y., Xu, C., Hu, B., Zhang, C., Xiao, T., & Zhu, J. (2023). Improving End-to-End Speech Translation by Leveraging Auxiliary Speech and Text Data. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13984-13992. https://doi.org/10.1609/aaai.v37i11.26637

Section

AAAI Technical Track on Speech & Natural Language Processing