Fake it Till You Make it: Self-Supervised Semantic Shifts for Monolingual Word Embedding Tasks
Keywords: Applications, Unsupervised & Self-Supervised Learning
Abstract
The use of language varies over time as well as across social groups and knowledge domains, leading to differences even in the monolingual scenario. Such variation in word usage is often called lexical semantic change (LSC). The goal of LSC detection is to characterize and quantify language variation with respect to word meaning, measuring how distinct two language sources are (e.g., people or language models). Because there is hardly any labeled data available for this task, most solutions rely on unsupervised methods that align two embedding spaces and predict semantic change via a distance measure. To that end, we propose a self-supervised approach that models lexical semantic change by perturbing word vectors in the input corpora. We show that our method can be used to detect semantic change with any alignment method. Furthermore, it can be used to choose the landmark words for alignment and can yield substantial improvements over existing alignment techniques. We illustrate the utility of our techniques with experimental results on three different datasets, involving words with the same or different meanings. Our methods not only provide significant improvements but can also lead to novel findings for the LSC problem.
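The abstract's pipeline (align two embedding spaces, then score words by a distance measure, with perturbed vectors serving as synthetic "changed" words) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes orthogonal Procrustes as the alignment method and cosine distance as the change score, and the injected perturbation of one word's vector plays the role of a self-supervised pseudo-label for semantic change.

```python
import numpy as np

def procrustes_align(A, B):
    """Orthogonal rotation R minimizing ||A @ R - B||_F
    (closed-form solution via SVD of A^T B)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

def semantic_shift(A, B):
    """Per-word cosine distance between the aligned source
    embeddings A @ R and the target embeddings B.
    Higher values suggest stronger semantic change."""
    R = procrustes_align(A, B)
    A_rot = A @ R
    num = np.sum(A_rot * B, axis=1)
    den = np.linalg.norm(A_rot, axis=1) * np.linalg.norm(B, axis=1)
    return 1.0 - num / den

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 20))          # source embeddings (50 words)
    Q, _ = np.linalg.qr(rng.normal(size=(20, 20)))
    B = A @ Q                              # target = rotated copy (no change)
    B[0] = rng.normal(size=20) * 5.0       # perturb word 0: a "fake" shift
    shifts = semantic_shift(A, B)
    print(int(np.argmax(shifts)))          # the perturbed word ranks highest
```

Because the perturbation is injected deliberately, the resulting (word, shifted?) pairs can serve as supervision for evaluating or tuning an alignment method without any manually annotated LSC data.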
How to Cite
Gruppi, M., Chen, P.-Y., & Adali, S. (2021). Fake it Till You Make it: Self-Supervised Semantic Shifts for Monolingual Word Embedding Tasks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 12893-12901. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17525
AAAI Technical Track on Speech and Natural Language Processing I