Incrementally Learning the Hierarchical Softmax Function for Neural Language Models

Authors

  • Hao Peng, Beihang University
  • Jianxin Li, Beihang University
  • Yangqiu Song, Hong Kong University of Science and Technology
  • Yaopeng Liu, Beihang University

DOI:

https://doi.org/10.1609/aaai.v31i1.10994

Keywords:

Incremental Learning, Word Representation, CBOW, Skip-gram

Abstract

Neural network language models (NNLMs) have attracted a lot of attention recently. In this paper, we present a training method that incrementally trains the hierarchical softmax function for NNLMs. We split the cost function to model the old and update corpora separately, and factorize the objective function for the hierarchical softmax. We then provide a new stochastic-gradient-based method to update all the word vectors and parameters by comparing the old tree, generated from the old corpus, with the new tree, generated from the combined (old and update) corpus. Theoretical analysis shows that the mean square error of the parameter vectors can be bounded by a function of the number of changed words related to the parameter node. Experimental results show that incremental training saves a substantial amount of time: the smaller the update corpus, the faster the update training, with speedups of up to 30 times. We also use word similarity/relatedness tasks and a dependency parsing task as benchmarks to evaluate the correctness of the updated word vectors.
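To make the setting concrete, here is a minimal sketch (not the authors' released code) assuming a word2vec-style CBOW model with hierarchical softmax: one stochastic-gradient step over a word's Huffman path, plus a hypothetical carry-over routine that reuses old inner-node vectors whenever the set of words under a node is unchanged between the old and new trees. The function names, the leaf-set keying, and the initialization scale are illustrative assumptions, not the paper's exact procedure.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def hs_sgd_step(context_vec, path_nodes, path_codes, node_vecs, lr=0.025):
        """One hierarchical-softmax SGD step for a single target word.

        path_nodes: indices of inner nodes on the root-to-word Huffman path
        path_codes: binary Huffman codes taken at each of those nodes
        node_vecs:  matrix of inner-node parameter vectors (updated in place)
        Returns the gradient to be added to the context (hidden) vector.
        """
        grad_context = np.zeros_like(context_vec)
        for node, code in zip(path_nodes, path_codes):
            f = sigmoid(node_vecs[node] @ context_vec)
            g = lr * ((1 - code) - f)          # gradient of the log-likelihood term
            grad_context += g * node_vecs[node]
            node_vecs[node] += g * context_vec
        return grad_context

    def carry_over(old_leafset_to_vec, new_node_leafsets, dim, rng):
        """Hypothetical carry-over when rebuilding the Huffman tree: reuse an
        old inner-node vector if the set of words under the node is identical
        in the old and new trees; otherwise initialize near zero."""
        new_vecs = np.zeros((len(new_node_leafsets), dim))
        for i, leaf_set in enumerate(new_node_leafsets):   # leaf_set: frozenset of words under node i
            if leaf_set in old_leafset_to_vec:
                new_vecs[i] = old_leafset_to_vec[leaf_set]
            else:
                new_vecs[i] = rng.normal(scale=0.01, size=dim)
        return new_vecs

Under this reading, nodes whose subtrees are untouched by the update corpus keep their trained parameters, and only the changed portion of the tree is retrained, which is the intuition behind the reported speedups.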

Published

2017-02-12

How to Cite

Peng, H., Li, J., Song, Y., & Liu, Y. (2017). Incrementally Learning the Hierarchical Softmax Function for Neural Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10994