Training and Evaluating Improved Dependency-Based Word Embeddings


  • Chen Li, Beihang University
  • Jianxin Li, Beihang University
  • Yangqiu Song, Hong Kong University of Science and Technology
  • Ziwei Lin, Beihang University


Keywords: Word embeddings, Dependency, Context composition


Abstract

Word embeddings have been widely used in many natural language processing tasks. In this paper, we focus on learning word embeddings through selective higher-order relationships in sentences, making the embeddings less sensitive to local context and more accurate in capturing semantic compositionality. We present a novel multi-order dependency-based strategy to compose and represent the context under several essential constraints. To realize selective learning from word contexts, we automatically assign strengths to the different dependencies between co-occurring words during stochastic gradient descent. We evaluate and analyze our proposed approach on several direct and indirect tasks for word embeddings. Experimental results demonstrate that our embeddings are competitive with or better than state-of-the-art methods and significantly outperform other methods in terms of context stability. The output weights and representations of dependencies obtained by our embedding model conform to most linguistic characteristics and are valuable for many downstream tasks.
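To make the idea of multi-order dependency contexts concrete, the following is a minimal, hypothetical sketch of how (word, context) pairs might be extracted from a dependency parse. First-order contexts follow the familiar dependency-based scheme of attaching the arc label to the co-occurring word; second-order contexts concatenate labels along a two-arc path. All names here are illustrative assumptions, not the paper's actual implementation, and the parse is hardcoded rather than produced by a parser.

```python
# Hypothetical sketch of multi-order dependency-based context extraction.
# The paper's actual composition, constraints, and learned dependency
# strengths are more involved; this only illustrates the general shape.

from collections import defaultdict

def extract_contexts(tokens, arcs, max_order=2):
    """Generate (word, context, order) triples from a dependency parse.

    tokens: list of word strings.
    arcs:   list of (head_index, label, child_index) triples.
    """
    children = defaultdict(list)
    for h, lab, c in arcs:
        children[h].append((lab, c))

    contexts = []
    for h, lab, c in arcs:
        # First-order contexts: head sees child (and vice versa, with an
        # inverse-marked label), as in dependency-based skip-gram.
        contexts.append((tokens[h], f"{tokens[c]}/{lab}", 1))
        contexts.append((tokens[c], f"{tokens[h]}/{lab}-1", 1))
        if max_order >= 2:
            # Second-order contexts: head sees grandchild through the
            # composed label path (a simplification of multi-order
            # context composition).
            for lab2, gc in children[c]:
                contexts.append((tokens[h], f"{tokens[gc]}/{lab}:{lab2}", 2))
    return contexts

# Example parse of "australian scientist discovers star":
#   scientist <-amod- australian, discovers <-nsubj- scientist,
#   discovers -dobj-> star
tokens = ["australian", "scientist", "discovers", "star"]
arcs = [(1, "amod", 0), (2, "nsubj", 1), (2, "dobj", 3)]
pairs = extract_contexts(tokens, arcs)
# Second-order pairs let "discovers" see "australian" through nsubj:amod,
# a relationship a linear context window of size 1 would miss.
```

In a full model, each dependency label (or label path) would additionally carry a strength parameter updated alongside the embeddings during stochastic gradient descent, so that informative relations are weighted up and noisy ones down.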




How to Cite

Li, C., Li, J., Song, Y., & Lin, Z. (2018). Training and Evaluating Improved Dependency-Based Word Embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).