Circles are like Ellipses, or Ellipses are like Circles? Measuring the Degree of Asymmetry of Static and Contextual Word Embeddings and the Implications to Representation Learning
DOI:
https://doi.org/10.1609/aaai.v35i16.17701
Keywords:
Lexical & Frame Semantics, Semantic Parsing
Abstract
Human judgments of word similarity have been a popular method for evaluating the quality of word embeddings, but they fail to capture geometric properties such as asymmetry. For example, it is more natural to say ``Ellipses are like Circles'' than ``Circles are like Ellipses''. Such asymmetry has been observed in word evocation experiments, in which one word is used to recall another, yet these association data have been understudied as a measure of embedding quality. In this paper, we use three well-known evocation datasets for this purpose and study both static embeddings and contextual embeddings such as BERT. To account for the dynamic nature of BERT embeddings, we probe BERT's conditional probabilities as a language model over a large number of Wikipedia contexts to derive a theoretically justifiable Bayesian asymmetry score. The results show that asymmetry judgments and similarity judgments disagree, and that the asymmetry judgments align with BERT's strong performance on ``extrinsic evaluations''. This is the first time contextual embeddings' strength can be shown on an intrinsic evaluation, and the asymmetry judgment provides a new perspective for evaluating contextual embeddings and new insights for representation learning.
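To make the probing idea concrete, the sketch below shows one way to query a masked language model's conditional probabilities for a word pair in a fixed context and compare the two directions. This is only a minimal illustration under stated assumptions, not the paper's procedure: the template string, the word pair ("circle"/"square"), and the simple log-ratio comparison are all illustrative stand-ins, whereas the paper aggregates over many sampled Wikipedia contexts to obtain its Bayesian asymmetry score.

```python
# Minimal sketch (not the paper's exact method): probe a masked language
# model's conditional probability of one word given a context mentioning
# another word, then compare the two directions.
import math

import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()


def masked_prob(context_template: str, cue: str, target: str) -> float:
    """P(target fills the [MASK] position | context mentioning the cue word)."""
    target_id = tokenizer.convert_tokens_to_ids(target)
    # Keep the example simple: require the target to be a single wordpiece.
    assert target_id != tokenizer.unk_token_id, f"{target!r} is not a single wordpiece"

    text = context_template.format(cue=cue, mask=tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)
    probs = torch.softmax(logits[0, mask_pos], dim=-1)
    return probs[0, target_id].item()


# Hypothetical single context template; the paper instead samples many
# real Wikipedia contexts for each word pair.
template = "a {cue} is like a {mask} ."

p_a_to_b = masked_prob(template, cue="circle", target="square")
p_b_to_a = masked_prob(template, cue="square", target="circle")

# Illustrative directional comparison: log-ratio of the two conditional
# probabilities (the paper's Bayesian score is derived differently and
# aggregated over contexts).
print("log P(square | circle) - log P(circle | square) =",
      math.log(p_a_to_b) - math.log(p_b_to_a))
```

A positive value under this toy score would indicate that the model more readily evokes the second word from the first than vice versa; the paper's contribution is a principled, context-aggregated version of this kind of directional comparison.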
Published
2021-05-18
How to Cite
Zhang, W., Campbell, M., Yu, Y., & Kumaravel, S. (2021). Circles are like Ellipses, or Ellipses are like Circles? Measuring the Degree of Asymmetry of Static and Contextual Word Embeddings and the Implications to Representation Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16), 14472-14480. https://doi.org/10.1609/aaai.v35i16.17701
Issue
Section
AAAI Technical Track on Speech and Natural Language Processing III