Learning Term Embeddings for Lexical Taxonomies
Keywords: Knowledge Representation Languages
Abstract
Lexical taxonomies, a special kind of knowledge graph, are essential for natural language understanding. This paper studies the problem of lexical taxonomy embedding. Most existing graph embedding methods are difficult to apply to lexical taxonomies because 1) they ignore implicit but important information, namely sibling relations, which are not explicitly stated in lexical taxonomies, and 2) many terms in lexical taxonomies are polysemous. In this paper, we propose a novel method for lexical taxonomy embedding. This method optimizes an objective function that models both hyponym-hypernym relations and sibling relations. A term-level attention mechanism and a random-walk-based metric are then proposed to assist the modeling of these two kinds of relations, respectively. Finally, a novel training method based on curriculum learning is proposed. We conduct extensive experiments on two tasks to show that our approach outperforms other embedding methods, and we use the learned term embeddings to enhance the performance of state-of-the-art BERT- and RoBERTa-based models on text classification.
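The abstract describes an objective that jointly models hyponym-hypernym relations and sibling relations. The sketch below is a hypothetical illustration of that general idea, not the paper's actual formulation: it combines a margin ranking loss over (hyponym, hypernym) pairs with a term that pulls sibling (co-hyponym) embeddings together. The toy vocabulary, the fixed corrupted hypernym, and the weighting factor `alpha` are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary of taxonomy terms; the embedding matrix E holds the
# parameters that a real method would learn by gradient descent.
terms = ["animal", "dog", "cat", "car"]
idx = {t: i for i, t in enumerate(terms)}
dim = 8
E = rng.normal(scale=0.1, size=(len(terms), dim))

# (hyponym, hypernym) pairs and sibling pairs from a tiny taxonomy.
hypernym_pairs = [("dog", "animal"), ("cat", "animal")]
sibling_pairs = [("dog", "cat")]

def score(u, v):
    # Negative squared Euclidean distance: higher means more plausible.
    return -np.sum((E[idx[u]] - E[idx[v]]) ** 2)

def joint_loss(margin=1.0, alpha=0.5):
    # Margin ranking loss on hypernym pairs against a corrupted pair,
    # plus an alpha-weighted term that penalizes distant siblings.
    loss = 0.0
    for hypo, hyper in hypernym_pairs:
        neg = "car"  # corrupted (false) hypernym, fixed for simplicity
        loss += max(0.0, margin - score(hypo, hyper) + score(hypo, neg))
    for a, b in sibling_pairs:
        loss += alpha * -score(a, b)  # squared distance between siblings
    return loss

print(joint_loss())
```

A full implementation would sample corrupted pairs at random and update `E` to minimize this loss; the paper's attention mechanism and random-walk metric, which this sketch omits, would refine how each relation type contributes.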
How to Cite
Liu, J., Wang, M., Wang, C., Liang, J., Chen, L., Jiang, H., Xiao, Y., & Chen, Y. (2021). Learning Term Embeddings for Lexical Taxonomies. Proceedings of the AAAI Conference on Artificial Intelligence, 35(7), 6410-6417. https://doi.org/10.1609/aaai.v35i7.16795
AAAI Technical Track on Knowledge Representation and Reasoning