Learning Word Vectors Efficiently Using Shared Representations and Document Representations
DOI:
https://doi.org/10.1609/aaai.v29i1.9711
Keywords:
Knowledge Representation, Machine Learning, Statistical Learning, Data Mining
Abstract
We propose improved word embedding models based on the vLBL and ivLBL models, obtained by sharing representations between context and target words and by using document representations. Our proposed models are much simpler, with almost half as many parameters as state-of-the-art methods. We achieve better results on the word analogy task than the best previously reported, while using significantly less training data and computing time.
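The parameter savings described in the abstract come from tying the context and target embedding tables, which an unshared log-bilinear model keeps separate. The following is a minimal, hypothetical PyTorch sketch of a vLBL-style predictor with such a shared table; the class name SharedLBL, the toy dimensions, and the full-softmax loss (the models in the paper are trained with noise-contrastive estimation for efficiency) are illustrative assumptions, not the authors' implementation. A document representation could be fed in as one additional context input under the same scheme.

```python
# Minimal sketch (not the authors' code) of a vLBL-style log-bilinear model
# with a SHARED embedding table for context and target words -- the idea the
# abstract credits with roughly halving the parameter count.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedLBL(nn.Module):
    def __init__(self, vocab_size, dim, context_size):
        super().__init__()
        # One embedding table serves both roles: context words and targets.
        # An unshared vLBL would keep two tables (2 * vocab_size * dim params).
        self.emb = nn.Embedding(vocab_size, dim)
        # Per-position weights that combine the context embeddings.
        self.pos_weights = nn.Parameter(torch.ones(context_size, dim))
        self.bias = nn.Parameter(torch.zeros(vocab_size))

    def forward(self, context_ids):
        # context_ids: (batch, context_size) -> predicted target vector q
        ctx = self.emb(context_ids)                   # (batch, ctx, dim)
        q = (ctx * self.pos_weights).sum(dim=1)       # (batch, dim)
        # Score every vocabulary word against q using the SAME table.
        logits = q @ self.emb.weight.t() + self.bias  # (batch, vocab)
        return logits

# Toy usage: predict a target word from a 4-word context window.
model = SharedLBL(vocab_size=1000, dim=50, context_size=4)
context = torch.randint(0, 1000, (8, 4))  # batch of 8 random contexts
target = torch.randint(0, 1000, (8,))
loss = F.cross_entropy(model(context), target)
loss.backward()
```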
Published
2015-03-04
How to Cite
Luo, Q., & Xu, W. (2015). Learning Word Vectors Efficiently Using Shared Representations and Document Representations. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9711
Section
Student Abstract Track