Understanding the Semantic Content of Sparse Word Embeddings Using a Commonsense Knowledge Base

Authors

  • Vanda Balogh, University of Szeged
  • Gábor Berend, University of Szeged
  • Dimitrios I. Diochnos, University of Oklahoma
  • György Turán, University of Illinois at Chicago

DOI:

https://doi.org/10.1609/aaai.v34i05.6235

Abstract

Word embeddings have developed into a major NLP tool with broad applicability. Understanding the semantic content of word embeddings remains an important challenge for further applications. One aspect of this issue is the interpretability of word embeddings. Sparse word embeddings have been proposed as models with improved interpretability. Continuing this line of research, we investigate the extent to which human-interpretable semantic concepts emerge along the bases of sparse word representations. To provide a broad framework for evaluation, we consider three general approaches for constructing sparse word representations, which are then evaluated in multiple ways. We propose a novel methodology for evaluating the semantic content of word embeddings using a commonsense knowledge base, applied here to the sparse case. This methodology is illustrated by two techniques using the ConceptNet knowledge base. The first technique assigns a commonsense concept label to the individual dimensions of the embedding space. The second uses a metric, derived by spreading activation, to quantify the coherence of coordinates along the individual axes. We also provide results on the relationship between the two techniques. The results show, for example, that within the individual dimensions of sparse word embeddings, words with high coefficients are more semantically related, in terms of path lengths in the knowledge base, than words with zero coefficients.
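The abstract's final claim can be made concrete with a minimal sketch: given a small, hypothetical ConceptNet-like graph (the toy edges and word sets below are illustrative, not the paper's data), words that load heavily on a single sparse dimension should lie closer together under shortest-path distance than a set that mixes in zero-coefficient words.

```python
from collections import deque

# Toy undirected knowledge graph (hypothetical ConceptNet-like edges;
# real ConceptNet edges are typed and directed).
GRAPH = {
    "dog": ["animal", "pet"],
    "cat": ["animal", "pet"],
    "animal": ["dog", "cat", "pet", "thing"],
    "pet": ["dog", "cat", "animal"],
    "thing": ["animal", "vehicle"],
    "vehicle": ["car", "thing"],
    "car": ["vehicle"],
}

def path_length(graph, src, dst):
    """BFS shortest-path length between two concepts; None if unreachable."""
    if src == dst:
        return 0
    seen = {src}
    queue = deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        for nbr in graph.get(node, []):
            if nbr == dst:
                return dist + 1
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None

def mean_pairwise_path(graph, words):
    """Average pairwise path length over a word set (reachable pairs only)."""
    lengths = [path_length(graph, a, b)
               for i, a in enumerate(words) for b in words[i + 1:]]
    lengths = [d for d in lengths if d is not None]
    return sum(lengths) / len(lengths) if lengths else float("inf")

# Hypothetical word sets for one sparse dimension.
high_coeff = ["dog", "cat", "pet"]      # large coefficients on this axis
mixed_set = ["dog", "car", "vehicle"]   # includes zero-coefficient words

print(mean_pairwise_path(GRAPH, high_coeff))  # ~1.33: semantically coherent
print(mean_pairwise_path(GRAPH, mixed_set))   # ~2.67: less coherent
```

A lower average path length for the high-coefficient set mirrors the paper's finding that such words are more semantically related in the knowledge base.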

Published

2020-04-03

How to Cite

Balogh, V., Berend, G., Diochnos, D. I., & Turán, G. (2020). Understanding the Semantic Content of Sparse Word Embeddings Using a Commonsense Knowledge Base. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 7399-7406. https://doi.org/10.1609/aaai.v34i05.6235

Section

AAAI Technical Track: Natural Language Processing