Definition Modeling: Learning to Define Word Embeddings in Natural Language

Authors

  • Thanapon Noraset, Northwestern University
  • Chen Liang, Northwestern University
  • Larry Birnbaum, Northwestern University
  • Doug Downey, Northwestern University

DOI:

https://doi.org/10.1609/aaai.v31i1.10996

Keywords:

word embedding, recurrent neural network, natural language generation, dictionary definition, semantics

Abstract

Distributed representations of words have been shown to capture lexical semantics, based on their effectiveness in word similarity and analogical relation tasks. However, these tasks evaluate lexical semantics only indirectly. In this paper, we study whether it is possible to utilize distributed representations to generate dictionary definitions of words, as a more direct and transparent representation of the embeddings' semantics. We introduce definition modeling, the task of generating a definition for a given word and its embedding. We present several definition model architectures based on recurrent neural networks and experiment with the models on multiple data sets. Our results show that a model that controls dependencies between the word being defined and the definition words performs significantly better, and that a character-level convolution layer that leverages morphology can complement word-level embeddings. Our analysis reveals which components of our models contribute to accuracy. Finally, the errors made by a definition model may provide insight into the shortcomings of word embeddings.

Published

2017-02-12

How to Cite

Noraset, T., Liang, C., Birnbaum, L., & Downey, D. (2017). Definition Modeling: Learning to Define Word Embeddings in Natural Language. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10996