Adapting Translation Models for Transcript Disfluency Detection


  • Qianqian Dong Chinese Academy of Sciences
  • Feng Wang Chinese Academy of Sciences
  • Zhen Yang Chinese Academy of Sciences
  • Wei Chen Chinese Academy of Sciences
  • Shuang Xu CASIA
  • Bo Xu CASIA



Transcript disfluency detection (TDD) is an important component of real-time speech translation systems and has attracted growing interest in recent years. This paper presents our study on adapting neural machine translation (NMT) models for TDD. We propose a general training framework for rapidly adapting NMT models to the TDD task. In this framework, the main structure of the model follows that of the NMT model, and several extended modules and training techniques that are independent of the NMT model are proposed to improve performance, such as constrained decoding, denoising-autoencoder initialization, and a TDD-specific training objective. With the proposed training framework, we achieve significant improvements. However, decoding is too slow to be practical. To build a feasible, production-ready solution for TDD, we propose a fast non-autoregressive TDD model, following the non-autoregressive NMT models that have emerged recently. Although we do not assume a specific NMT architecture, we build our TDD model on the Transformer, the state-of-the-art NMT model. We conduct extensive experiments on the publicly available Switchboard dataset and an in-house Chinese dataset. Experimental results show that the proposed model significantly outperforms previous state-of-the-art models.
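The core constraint behind casting TDD as a translation problem is that the fluent output must be a subsequence of the input transcript, which is what constrained decoding enforces. A minimal illustrative sketch (not from the paper; the example sentence and tags are hypothetical) frames this as token-level keep/delete tagging:

```python
# Illustrative sketch: TDD as keep/delete tagging over a transcript.
# A translation-style TDD model with constrained decoding may only
# copy ("K") or drop ("D") input tokens, never generate new ones.

def apply_tags(tokens, tags):
    """Keep tokens tagged 'K'; drop tokens tagged 'D' (disfluent)."""
    return [tok for tok, tag in zip(tokens, tags) if tag == "K"]

transcript = "i want a flight uh i mean a ticket to boston".split()
# Hypothetical gold tags: the reparandum "a flight" and the filler /
# editing phrase "uh i mean" are disfluent and should be deleted.
tags = ["K", "K", "D", "D", "D", "D", "D", "K", "K", "K", "K"]

fluent = apply_tags(transcript, tags)
print(" ".join(fluent))  # i want a ticket to boston
```

Because the output vocabulary is restricted to the input tokens, this framing sidesteps the open-vocabulary generation problem of general NMT decoding.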




How to Cite

Dong, Q., Wang, F., Yang, Z., Chen, W., Xu, S., & Xu, B. (2019). Adapting Translation Models for Transcript Disfluency Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6351-6358.



AAAI Technical Track: Natural Language Processing