Future-Guided Incremental Transformer for Simultaneous Translation

Authors

  • Shaolei Zhang, Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS); University of Chinese Academy of Sciences
  • Yang Feng, Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS); University of Chinese Academy of Sciences
  • Liangyou Li, Huawei Noah's Ark Lab

DOI:

https://doi.org/10.1609/aaai.v35i16.17696

Keywords:

Machine Translation & Multilinguality

Abstract

Simultaneous translation (ST) begins producing the translation while still reading the source sentence, and is used in many online scenarios. The earlier wait-k policy is simple and achieves good results in ST, but it suffers from two weaknesses: low training speed, caused by repeatedly recomputing hidden states, and a lack of future source information to guide training. To address the low training speed, we propose an incremental Transformer with an average embedding layer (AEL) that accelerates the computation of hidden states during training. For future-guided training, we use a conventional Transformer as the teacher of the incremental Transformer and implicitly embed future information in the model through knowledge distillation. We conducted experiments on Chinese-English and German-English simultaneous translation tasks, comparing against the wait-k policy. Our method speeds up training by about 28 times on average across different values of k and implicitly endows the model with some predictive ability, achieving better translation quality than the wait-k baseline.
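To make the two ingredients concrete, below is a minimal sketch in Python/PyTorch (an assumption; the paper's own code is not reproduced here) of the standard wait-k read/write schedule and a generic knowledge-distillation objective of the kind the abstract describes. The function names are illustrative, not taken from the paper.

import torch.nn.functional as F

def visible_source_tokens(t, k, src_len):
    # Wait-k policy: first read k source tokens, then alternate one
    # write with one read, so the t-th target token (1-indexed) may
    # attend to g(t) = min(t + k - 1, |x|) source tokens.
    return min(t + k - 1, src_len)

def future_guided_kd_loss(student_logits, teacher_logits, temperature=1.0):
    # Illustrative distillation objective: the incremental (student)
    # model is trained to match the output distribution of a
    # full-sentence (teacher) Transformer, implicitly injecting future
    # source information that the student cannot see at decoding time.
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

In this sketch the student only attends to the prefix given by visible_source_tokens, while the teacher sees the full sentence; adding the KD term to the usual cross-entropy loss is one plausible way to realize the "future-guided" training the abstract refers to.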

Published

2021-05-18

How to Cite

Zhang, S., Feng, Y., & Li, L. (2021). Future-Guided Incremental Transformer for Simultaneous Translation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16), 14428-14436. https://doi.org/10.1609/aaai.v35i16.17696

Section

AAAI Technical Track on Speech and Natural Language Processing III