Synchronous Speech Recognition and Speech-to-Text Translation with Interactive Decoding


  • Yuchen Liu CASIA
  • Jiajun Zhang CASIA
  • Hao Xiong Baidu
  • Long Zhou CASIA
  • Zhongjun He Baidu
  • Hua Wu Baidu
  • Haifeng Wang Baidu
  • Chengqing Zong CASIA



Speech-to-text translation (ST), which translates source-language speech into target-language text, has attracted intensive attention in recent years. Compared with the traditional pipeline system, an end-to-end ST model offers the potential benefits of lower latency, smaller model size, and less error propagation. However, such a model is notoriously difficult to train without transcriptions as an intermediate representation. Existing work generally applies multi-task learning to improve translation quality by jointly training end-to-end ST with automatic speech recognition (ASR). In this setting, however, the tasks cannot exploit information from each other, which limits the improvement. Other work proposes a two-stage model in which the second stage can use the hidden states of the first, but its cascaded manner greatly reduces the efficiency of training and inference. In this paper, we propose a novel interactive attention mechanism that enables ASR and ST to be performed synchronously and interactively in a single model. Specifically, the generation of transcriptions and translations relies not only on each decoder's own previous outputs but also on the outputs predicted by the other task. Experiments on TED speech translation corpora show that the proposed model outperforms strong baselines on speech translation quality and also achieves better speech recognition performance.
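The core interactive-decoding idea — each decoder step attends both to its own history and to the other task's history — can be sketched as below. This is a minimal illustration, not the paper's exact formulation: the function names, the single-head dot-product attention, and the use of the acoustic context vector as the query are all assumptions made for clarity.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend(query, keys, values):
    # scaled dot-product attention: query (d,), keys/values (T, d) -> (d,)
    scores = keys @ query / np.sqrt(len(query))
    return softmax(scores) @ values

def interactive_decode_step(self_hist, other_hist, speech_ctx):
    """One synchronous decoder step for one task (ASR or ST).

    self_hist:  list of this decoder's previous output embeddings
    other_hist: list of the other task's previous output embeddings
    speech_ctx: acoustic context vector from the shared encoder
                (used here as the attention query -- an assumption)
    """
    self_c = attend(speech_ctx, np.stack(self_hist), np.stack(self_hist))
    other_c = attend(speech_ctx, np.stack(other_hist), np.stack(other_hist))
    # combine own-history context, cross-task context, and acoustics
    return self_c + other_c + speech_ctx

# Toy usage: ASR and ST decoders run in lockstep, each conditioning
# on the other's partial outputs at every step.
d = 8
rng = np.random.default_rng(0)
asr_hist = [rng.standard_normal(d)]   # e.g. start-of-sequence embedding
st_hist = [rng.standard_normal(d)]
for _ in range(3):
    ctx = rng.standard_normal(d)      # per-step acoustic context
    asr_hist.append(interactive_decode_step(asr_hist, st_hist, ctx))
    st_hist.append(interactive_decode_step(st_hist, asr_hist, ctx))
```

In a real model the combined vector would feed an output projection and softmax to predict the next transcription or translation token; the sketch only shows how the two decoding streams exchange information at every step instead of running as a cascade.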




How to Cite

Liu, Y., Zhang, J., Xiong, H., Zhou, L., He, Z., Wu, H., Wang, H., & Zong, C. (2020). Synchronous Speech Recognition and Speech-to-Text Translation with Interactive Decoding. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 8417-8424.



AAAI Technical Track: Natural Language Processing