Spell Once, Summon Anywhere: A Two-Level Open-Vocabulary Language Model

Authors

  • Sabrina J. Mielke, Johns Hopkins University
  • Jason Eisner, Johns Hopkins University

DOI:

https://doi.org/10.1609/aaai.v33i01.33016843

Abstract

We show how the spellings of known words can help us deal with unknown words in open-vocabulary NLP tasks. The method we propose can be used to extend any closed-vocabulary generative model, but in this paper we specifically consider the case of neural language modeling. Our Bayesian generative story combines a standard RNN language model (generating the word tokens in each sentence) with an RNN-based spelling model (generating the letters in each word type). These two RNNs respectively capture sentence structure and word structure, and are kept separate as in linguistics. By invoking the second RNN to generate spellings for novel words in context, we obtain an open-vocabulary language model. For known words, embeddings are naturally inferred by combining evidence from type spelling and token context. In comparisons against baselines (including a novel strong baseline), we beat previous work and establish state-of-the-art results on multiple datasets.
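To make the two-level architecture described above concrete, here is a minimal illustrative sketch (not the authors' released code) in PyTorch. It assumes a word-level LSTM that predicts the next token over a closed vocabulary plus a special UNK id, and a separate character-level LSTM that spells out a word letter by letter whenever a novel word must be generated; all module and parameter names are hypothetical.

    # Illustrative sketch of a two-level open-vocabulary LM (assumed design).
    import torch
    import torch.nn as nn

    class TwoLevelLM(nn.Module):
        def __init__(self, vocab_size, char_vocab_size, dim=256):
            super().__init__()
            # Sentence-level model: generates word tokens, including a
            # special UNK id that signals "spell a novel word here".
            self.word_emb = nn.Embedding(vocab_size, dim)
            self.word_rnn = nn.LSTM(dim, dim, batch_first=True)
            self.word_out = nn.Linear(dim, vocab_size)
            # Type-level speller: generates the letters of a word type,
            # kept separate from the sentence model as in the paper's story.
            self.char_emb = nn.Embedding(char_vocab_size, dim)
            self.char_rnn = nn.LSTM(dim, dim, batch_first=True)
            self.char_out = nn.Linear(dim, char_vocab_size)

        def word_logits(self, word_ids):
            # (batch, seq) token ids -> (batch, seq, vocab) next-word scores
            h, _ = self.word_rnn(self.word_emb(word_ids))
            return self.word_out(h)

        def spelling_logits(self, char_ids):
            # (batch, len) character ids -> (batch, len, chars) next-letter scores
            h, _ = self.char_rnn(self.char_emb(char_ids))
            return self.char_out(h)

In this sketch, generating a sentence would sample tokens from word_logits and, whenever UNK is drawn, fall back to sampling a spelling from spelling_logits; the paper's actual Bayesian treatment of word types and embedding inference is richer than this token-level fallback.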

Published

2019-07-17

How to Cite

Mielke, S. J., & Eisner, J. (2019). Spell Once, Summon Anywhere: A Two-Level Open-Vocabulary Language Model. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6843-6850. https://doi.org/10.1609/aaai.v33i01.33016843

Section

AAAI Technical Track: Natural Language Processing