Lattice Recurrent Unit: Improving Convergence and Statistical Efficiency for Sequence Modeling

Authors

  • Chaitanya Ahuja, Carnegie Mellon University
  • Louis-Philippe Morency, Carnegie Mellon University

Keywords

recurrent unit, sequence modeling, temporal, language model, lattice

Abstract

Recurrent neural networks have shown remarkable success in modeling sequences. However, low-resource situations still adversely affect the generalizability of these models. We introduce a new family of models, called Lattice Recurrent Units (LRU), to address the challenge of learning deep multi-layer recurrent models with limited resources. LRU models achieve this goal by creating distinct (but coupled) flows of information inside the units: a first flow along the time dimension and a second flow along the depth dimension. The design also offers symmetry in how information can flow horizontally and vertically. We analyze the effects of decoupling three different components of our LRU model: the Reset Gate, the Update Gate, and the Projected State. We evaluate this family of new LRU models on computational convergence rates and statistical efficiency. Our experiments are performed on four publicly available datasets, comparing against Grid-LSTM and Recurrent Highway Networks. Our results show that LRU models achieve better empirical computational convergence rates and statistical efficiency, along with learning more accurate language models.
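
The abstract's central idea, a unit with two coupled GRU-style flows of information (one along time, one along depth), can be illustrated with a minimal NumPy sketch. The weight names, the concatenation-based coupling, and the gate equations below are assumptions for illustration only; the paper's exact LRU formulation (and its decoupled Reset Gate, Update Gate, and Projected State variants) is given in the full text.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class LatticeRecurrentUnitSketch:
    """Illustrative lattice-style recurrent cell with two coupled,
    GRU-like flows: one along the time dimension and one along the
    depth dimension. Hypothetical parameterization, not the paper's
    exact equations."""

    def __init__(self, size, seed=0):
        rng = np.random.default_rng(seed)
        # One set of gate weights per output flow (time and depth).
        # Each gate reads the concatenation of both incoming states,
        # which is what couples the two flows.
        def w():
            return rng.standard_normal((size, 2 * size)) * 0.1

        self.W = {flow: {gate: w() for gate in ("reset", "update", "cand")}
                  for flow in ("time", "depth")}

    def step(self, h_time, h_depth):
        """h_time: state from the previous time step (horizontal flow).
        h_depth: state from the layer below (vertical flow).
        Returns the new (time, depth) output states."""
        x = np.concatenate([h_time, h_depth])
        out = {}
        for flow in ("time", "depth"):
            Wf = self.W[flow]
            r = sigmoid(Wf["reset"] @ x)            # reset gate
            z = sigmoid(Wf["update"] @ x)           # update gate
            gated = np.concatenate([r * h_time, r * h_depth])
            h_tilde = np.tanh(Wf["cand"] @ gated)   # projected (candidate) state
            prev = h_time if flow == "time" else h_depth
            out[flow] = (1.0 - z) * prev + z * h_tilde
        return out["time"], out["depth"]
```

Stacking such cells in a grid, with the "time" output fed rightward and the "depth" output fed upward, gives the lattice structure the abstract describes; decoupling a component would mean giving each flow its own copy of that gate rather than sharing it.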

Published

2018-04-27

How to Cite

Ahuja, C., & Morency, L.-P. (2018). Lattice Recurrent Unit: Improving Convergence and Statistical Efficiency for Sequence Modeling. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/12025