Unsupervised Transfer Learning for Spoken Language Understanding in Intelligent Agents

Authors

  • Aditya Siddhant, Carnegie Mellon University
  • Anuj Goyal, Amazon
  • Angeliki Metallinou, Amazon

DOI:

https://doi.org/10.1609/aaai.v33i01.33014959

Abstract

User interaction with voice-powered agents generates large amounts of unlabeled utterances. In this paper, we explore techniques to efficiently transfer the knowledge from these unlabeled utterances to improve model performance on Spoken Language Understanding (SLU) tasks. We use Embeddings from Language Models (ELMo) to take advantage of unlabeled data by learning contextualized word representations. Additionally, we propose ELMo-Light (ELMoL), a faster and simpler unsupervised pre-training method for SLU. Our findings suggest that unsupervised pre-training on a large corpus of unlabeled utterances leads to significantly better SLU performance than training from scratch, and that it can even outperform conventional supervised transfer. Additionally, we show that the gains from unsupervised transfer techniques can be further improved by supervised transfer. The improvements are more pronounced in low-resource settings: using only 1,000 labeled in-domain samples, our techniques match the performance of training from scratch on 10-15x more labeled in-domain data.
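The paper does not publish code, but the general setup it describes, feeding contextualized word representations from a pre-trained bidirectional language model (ELMo or ELMoL) into a downstream SLU model for intent classification and slot filling, can be sketched as below. This is a minimal PyTorch illustration under stated assumptions: the frozen contextual embeddings stand in for the pre-trained LM output, and all module names, dimensions, and the mean-pooled intent head are illustrative choices, not details from the paper.

```python
# Minimal sketch: an SLU model that consumes frozen contextualized word
# representations (e.g., from a pre-trained ELMo-style bidirectional LM)
# and predicts one utterance-level intent plus per-token slot tags.
# All hyperparameters here are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class SLUModel(nn.Module):
    def __init__(self, emb_dim=256, hidden=128, n_intents=10, n_slots=20):
        super().__init__()
        # BiLSTM encoder over the pre-trained contextual embeddings.
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.intent_head = nn.Linear(2 * hidden, n_intents)  # utterance-level
        self.slot_head = nn.Linear(2 * hidden, n_slots)      # token-level

    def forward(self, contextual_embs):
        # contextual_embs: (batch, seq_len, emb_dim), produced by a
        # pre-trained bidirectional LM and kept fixed here.
        states, _ = self.encoder(contextual_embs)
        intent_logits = self.intent_head(states.mean(dim=1))  # pool tokens
        slot_logits = self.slot_head(states)                  # per token
        return intent_logits, slot_logits

# Usage with random stand-in embeddings:
model = SLUModel()
embs = torch.randn(4, 12, 256)  # 4 utterances, 12 tokens, 256-dim embeddings
intent_logits, slot_logits = model(embs)
print(intent_logits.shape, slot_logits.shape)  # (4, 10) and (4, 12, 20)
```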

Published

2019-07-17

How to Cite

Siddhant, A., Goyal, A., & Metallinou, A. (2019). Unsupervised Transfer Learning for Spoken Language Understanding in Intelligent Agents. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 4959-4966. https://doi.org/10.1609/aaai.v33i01.33014959

Section

AAAI Technical Track: Machine Learning