On the Softmax Bottleneck of Recurrent Language Models

Authors

  • Dwarak Govind Parthiban, University of Ottawa
  • Yongyi Mao, University of Ottawa
  • Diana Inkpen, University of Ottawa

DOI:

https://doi.org/10.1609/aaai.v35i15.17608

Keywords:

Language Models, Interpretability & Analysis of NLP Models, Representation Learning

Abstract

Recent research has pointed to a limitation of word-level neural language models with softmax outputs. This limitation, known as the softmax bottleneck, refers to the inability of these models to produce high-rank log probability (log P) matrices. Various solutions have been proposed to break this bottleneck, including Mixture of Softmaxes, SigSoftmax, and Linear Monotonic Softmax with Piecewise Linear Increasing Functions, all of which were reported to offer better test perplexity. A natural inference from these results is that there is a strong positive correlation between the rank of the log P matrix and the model's performance. In this work, we show via an extensive empirical study that such a correlation is fairly weak and that a high-rank log P matrix is neither necessary nor sufficient for better test perplexity. Although our results are empirical, they are established in part via the construction of a rich family of models, which we call Generalized SigSoftmax, capable of producing log P matrices with diverse ranks. We also investigate why the previously proposed solutions achieve better performance.
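
As an illustration of the bottleneck (not part of the paper), the NumPy sketch below shows how the log P matrix of a single softmax over N contexts has rank at most d + 1 for hidden size d, whereas a Mixture of Softmaxes can exceed that bound. All sizes, the number of mixture components K, and the random parameters are illustrative assumptions, not the paper's experimental setup.

# Minimal NumPy sketch of the softmax bottleneck (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N, V, d, K = 200, 100, 10, 3           # contexts, vocab size, hidden size, mixture components

def log_softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

H = rng.standard_normal((N, d))        # context (hidden) vectors
W = rng.standard_normal((d, V))        # output embedding matrix

# Single softmax: log P = log_softmax(H W), so rank(log P) <= d + 1.
logP_single = log_softmax(H @ W)

# Mixture of K softmaxes: P = sum_k pi_k * softmax(H W_k); log P need not be low-rank.
Ws = [rng.standard_normal((d, V)) for _ in range(K)]
pis = rng.dirichlet(np.ones(K), size=N)            # per-context mixture weights
P_mos = sum(pis[:, [k]] * np.exp(log_softmax(H @ Ws[k])) for k in range(K))
logP_mos = np.log(P_mos)

print("rank of single-softmax log P:", np.linalg.matrix_rank(logP_single))  # about d + 1
print("rank of MoS log P:           ", np.linalg.matrix_rank(logP_mos))     # typically much higher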

Published

2021-05-18

How to Cite

Parthiban, D. G., Mao, Y., & Inkpen, D. (2021). On the Softmax Bottleneck of Recurrent Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13640-13647. https://doi.org/10.1609/aaai.v35i15.17608

Issue

Vol. 35 No. 15 (2021)

Section

AAAI Technical Track on Speech and Natural Language Processing II