Unsupervised Measure of Word Similarity: How to Outperform Co-Occurrence and Vector Cosine in VSMs

Authors

  • Enrico Santus, The Hong Kong Polytechnic University
  • Alessandro Lenci, University of Pisa
  • Tin-Shing Chiu, The Hong Kong Polytechnic University
  • Qin Lu, The Hong Kong Polytechnic University
  • Chu-Ren Huang, The Hong Kong Polytechnic University

DOI:

https://doi.org/10.1609/aaai.v30i1.9932

Keywords:

Semantic Relations, Semantics, Hypernymy, Entailment, Classifier, Features, Unsupervised, Vector Space Models, VSMs, Distributional Semantic Models, DSMs

Abstract

In this paper, we claim that vector cosine – generally considered among the most efficient unsupervised measures for identifying word similarity in Vector Space Models – can be outperformed by an unsupervised measure that calculates the extent of the intersection among the most mutually dependent contexts of the target words. To prove this, we describe and evaluate APSyn, a variant of Average Precision that, without any optimization, outperforms both vector cosine and co-occurrence on the standard ESL test set, with an improvement ranging from +9.00% to +17.98%, depending on the number of top contexts chosen.
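The idea described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: it assumes each word is represented by its contexts ranked by some association score (e.g. PPMI or LMI), and it assumes the scoring weights each shared top-N context by the inverse of its average rank in the two lists; the function name and parameters are illustrative.

```python
def apsyn(ranked_contexts_1, ranked_contexts_2, n=100):
    """Sketch of an APSyn-style similarity score.

    Each argument is a list of context words for a target word,
    sorted from most to least mutually dependent (e.g. by PPMI).
    The score sums, over the contexts shared by the two top-n
    lists, the inverse of each context's average rank -- so words
    that share many highly ranked contexts score higher.
    """
    # Map each top-n context to its 1-based rank for both words.
    rank1 = {c: r for r, c in enumerate(ranked_contexts_1[:n], start=1)}
    rank2 = {c: r for r, c in enumerate(ranked_contexts_2[:n], start=1)}
    # Intersect the two top-n context sets and weight by average rank.
    shared = rank1.keys() & rank2.keys()
    return sum(1.0 / ((rank1[c] + rank2[c]) / 2.0) for c in shared)


# Toy usage with hypothetical ranked context lists:
score = apsyn(["drink", "cup", "hot"], ["cup", "drink", "sugar"], n=3)
```

Here "drink" and "cup" are shared, each with an average rank of 1.5, so the score is 2 / 1.5 ≈ 1.33; the measure is symmetric in its two arguments, and the choice of n (the number of top contexts) is the parameter the abstract's +9.00% to +17.98% range depends on.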

Published

2016-03-05

How to Cite

Santus, E., Lenci, A., Chiu, T.-S., Lu, Q., & Huang, C.-R. (2016). Unsupervised Measure of Word Similarity: How to Outperform Co-Occurrence and Vector Cosine in VSMs. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.9932