SPINE: SParse Interpretable Neural Embeddings

Authors

  • Anant Subramanian, Carnegie Mellon University
  • Danish Pruthi, Carnegie Mellon University
  • Harsh Jhamtani, Carnegie Mellon University
  • Taylor Berg-Kirkpatrick, Carnegie Mellon University
  • Eduard Hovy, Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v32i1.11935

Keywords:

interpretability, representation learning, word embeddings, autoencoder

Abstract

Prediction without justification has limited utility. Much of the success of neural models can be attributed to their ability to learn rich, dense, and expressive representations. While these representations capture the underlying complexity and latent trends in the data, they are far from being interpretable. We propose a novel variant of denoising k-sparse autoencoders that generates highly efficient and interpretable distributed word representations (word embeddings), beginning with existing word representations from state-of-the-art methods like GloVe and word2vec. Through large-scale human evaluation, we report that the resulting word embeddings are much more interpretable than the original GloVe and word2vec embeddings. Moreover, our embeddings outperform existing popular word embeddings on a diverse suite of benchmark downstream tasks.
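To make the core idea concrete, below is a minimal, hypothetical PyTorch sketch of a denoising k-sparse autoencoder applied to pretrained dense word vectors. It is not the authors' released code: the class name, dimensions, noise level, and the simple top-k masking are illustrative assumptions, and the paper's full training objective includes additional sparsity penalties not shown here.

```python
# Hypothetical sketch (not the authors' released implementation): a denoising
# k-sparse autoencoder that maps dense word vectors (e.g., 300-d GloVe) to
# higher-dimensional, mostly-zero codes intended to be more interpretable.
import torch
import torch.nn as nn

class KSparseAutoencoder(nn.Module):
    def __init__(self, input_dim=300, hidden_dim=1000, k=50, noise_std=0.2):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, input_dim)
        self.k = k                  # number of hidden units kept active per word
        self.noise_std = noise_std  # Gaussian corruption for the denoising objective

    def forward(self, x):
        # Denoising: corrupt the input, then reconstruct the clean vector.
        noisy = x + self.noise_std * torch.randn_like(x)
        h = torch.sigmoid(self.encoder(noisy))  # non-negative codes in [0, 1]
        # k-sparse constraint: zero out all but the top-k activations per row.
        kth_value = torch.topk(h, self.k, dim=1).values[:, -1:].detach()
        h_sparse = h * (h >= kth_value).float()
        return self.decoder(h_sparse), h_sparse

# Usage sketch: `vectors` is an (N, 300) tensor of pretrained GloVe/word2vec
# embeddings; training minimizes reconstruction error (plus the paper's
# sparsity terms, omitted here).
# model = KSparseAutoencoder()
# recon, codes = model(vectors)
# loss = nn.functional.mse_loss(recon, vectors)
```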

Published

2018-04-26

How to Cite

Subramanian, A., Pruthi, D., Jhamtani, H., Berg-Kirkpatrick, T., & Hovy, E. (2018). SPINE: SParse Interpretable Neural Embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11935

Section

Main Track: NLP and Knowledge Representation