SoftCorrect: Error Correction with Soft Detection for Automatic Speech Recognition


  • Yichong Leng University of Science and Technology of China
  • Xu Tan Microsoft Research Asia
  • Wenjie Liu Microsoft Azure Speech
  • Kaitao Song Microsoft Research Asia
  • Rui Wang Microsoft Research Asia
  • Xiang-Yang Li University of Science and Technology of China
  • Tao Qin Microsoft Research Asia
  • Ed Lin Microsoft Azure Speech
  • Tie-Yan Liu Microsoft Research Asia



SNLP: Speech and Multimodality, SNLP: Applications


Error correction in automatic speech recognition (ASR) aims to correct the incorrect words in sentences generated by ASR models. Since recent ASR models usually have a low word error rate (WER), an error correction model should modify only the incorrect words so as not to affect originally correct tokens, which makes detecting incorrect words important for error correction. Previous works on error correction either implicitly detect error words through target-source attention or CTC (connectionist temporal classification) loss, or explicitly locate specific deletion/substitution/insertion errors. However, implicit error detection does not provide a clear signal about which tokens are incorrect, and explicit error detection suffers from low detection accuracy. In this paper, we propose SoftCorrect with a soft error detection mechanism to avoid the limitations of both explicit and implicit error detection. Specifically, we first detect whether a token is correct or not through a probability produced by a dedicated language model, and then design a constrained CTC loss that duplicates only the detected incorrect tokens so that the decoder can focus on correcting error tokens. Compared with implicit error detection with CTC loss, SoftCorrect provides an explicit signal about which words are incorrect and thus does not need to duplicate every token but only the incorrect ones; compared with explicit error detection, SoftCorrect does not detect specific deletion/substitution/insertion errors but simply leaves them to the CTC loss. Experiments on the AISHELL-1 and Aidatatang datasets show that SoftCorrect achieves 26.1% and 9.4% CER reduction respectively, outperforming previous works by a large margin while still enjoying the fast speed of parallel generation.
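The duplication step described in the abstract can be illustrated with a minimal sketch: tokens whose correctness probability falls below a threshold are duplicated to give a CTC-style decoder room to substitute or insert around them, while confidently correct tokens are passed through untouched. The function name, threshold value, and duplication factor here are illustrative assumptions, not details from the paper.

```python
def build_decoder_input(tokens, correct_probs, threshold=0.9, dup=2):
    """Sketch of soft-detection-gated duplication (hypothetical helper).

    tokens: ASR hypothesis tokens.
    correct_probs: per-token probability of being correct, e.g. from a
        language model as in the soft error detection described above.
    Tokens judged likely correct (prob >= threshold) are kept as-is;
    tokens judged likely incorrect are duplicated `dup` times so the
    CTC decoder can emit substitutions/insertions in their place.
    """
    decoder_input = []
    for tok, prob in zip(tokens, correct_probs):
        repeat = 1 if prob >= threshold else dup
        decoder_input.extend([tok] * repeat)
    return decoder_input

# Only the low-confidence middle token is duplicated:
# build_decoder_input(["我", "门", "走"], [0.99, 0.42, 0.95])
# -> ["我", "门", "门", "走"]
```

Duplicating only detected-incorrect tokens is what distinguishes this from plain CTC-based correction, which would duplicate every token and force the decoder to re-verify the entire sentence.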




How to Cite

Leng, Y., Tan, X., Liu, W., Song, K., Wang, R., Li, X.-Y., Qin, T., Lin, E., & Liu, T.-Y. (2023). SoftCorrect: Error Correction with Soft Detection for Automatic Speech Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13034-13042.



AAAI Technical Track on Speech & Natural Language Processing