Improving Domain-independent Cloud-Based Speech Recognition with Domain-Dependent Phonetic Post-Processing


  • Johannes Twiefel University of Hamburg
  • Timo Baumann University of Hamburg
  • Stefan Heinrich University of Hamburg
  • Stefan Wermter University of Hamburg



speech recognition, domain knowledge, cloud-based knowledge, phonetic distance


Automatic speech recognition (ASR) technology has been developed to such a level that off-the-shelf distributed speech recognition services are available free of cost. These allow researchers to integrate speech into their applications with little development effort or expert knowledge, often yielding better results than previously used open-source tools. Such services, however, typically do not accept language models or grammars, but instead process free-form speech from any domain. While results are very good given the enormous size of the search space, they frequently contain out-of-domain words or constructs that cannot be understood by subsequent domain-dependent natural language understanding (NLU) components. We present a versatile post-processing technique based on phonetic distance that integrates domain knowledge with open-domain ASR results, leading to improved ASR performance. Notably, our technique is able to exploit domain restrictions using various degrees of domain knowledge, ranging from pure vocabulary restrictions via grammars or N-grams to restrictions to a set of acceptable utterances. We present results for a variety of corpora (mainly from human-robot interaction) in which our combined approach significantly outperforms Google ASR as well as a plain open-source ASR solution.
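The core idea described in the abstract can be sketched roughly as follows: the open-domain ASR hypothesis is mapped to the phonetically closest candidate permitted by the domain (here, a fixed list of acceptable utterances). This is only an illustrative sketch, not the paper's implementation: `toy_phonetize` is a crude character-level stand-in for a real grapheme-to-phoneme converter, and the domain sentences are invented examples.

```python
def levenshtein(a, b):
    """Standard dynamic-programming edit distance between two sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def toy_phonetize(text):
    """Stand-in for a grapheme-to-phoneme converter: lowercase letters only."""
    return [c for c in text.lower() if c.isalpha()]

def rescore(hypothesis, domain_sentences, phonetize=toy_phonetize):
    """Return the in-domain sentence closest to the ASR output
    under the (stand-in) phonetic edit distance."""
    hyp = phonetize(hypothesis)
    return min(domain_sentences, key=lambda s: levenshtein(hyp, phonetize(s)))

# Hypothetical in-domain utterances for a robot command scenario.
domain = ["grasp the red ball", "drop the blue cube", "move to the table"]
print(rescore("gras the read bowl", domain))  # → "grasp the red ball"
```

In the paper's setting, the phonetic representation and the degree of domain restriction (vocabulary, grammar, N-grams, or full utterances) would replace the toy components above.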




How to Cite

Twiefel, J., Baumann, T., Heinrich, S., & Wermter, S. (2014). Improving Domain-independent Cloud-Based Speech Recognition with Domain-Dependent Phonetic Post-Processing. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1).



Main Track: NLP and Knowledge Representation