Fast Saturating Gate for Learning Long Time Scales with Recurrent Neural Networks

Authors

  • Kentaro Ohno, NTT
  • Sekitoshi Kanai, NTT
  • Yasutoshi Ida, NTT

DOI:

https://doi.org/10.1609/aaai.v37i8.26117

Keywords:

ML: Deep Neural Architectures, ML: Deep Learning Theory, ML: Deep Neural Network Algorithms, ML: Time-Series/Data Streams

Abstract

Gate functions in recurrent models such as LSTMs and GRUs play a central role in learning the various time scales of time-series data by using a bounded activation function. However, it is difficult to train gates to capture extremely long time scales because the gradient of the bounded function vanishes for large inputs, which is known as the saturation problem. We closely analyze the relation between saturation of the gate function and training efficiency. We prove that the gradient vanishing of the gate function can be mitigated by accelerating the convergence of the saturating function, i.e., by making the output of the function converge to 0 or 1 faster. Based on this analysis, we propose a gate function called the fast gate, which achieves a doubly exponential convergence rate with respect to its inputs through simple function composition. We empirically show that our method outperforms previous methods in accuracy and computational efficiency on benchmark tasks involving extremely long time scales.
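The core idea of the abstract, accelerating saturation by composing simple functions, can be illustrated with a small sketch. The particular composition below, a sigmoid applied to a hyperbolic sine, is an illustrative choice of ours and not necessarily the paper's exact fast gate: since sinh(x) grows like e^x/2, the composed gate approaches 1 at a doubly exponential rate exp(-e^x/2), whereas the plain sigmoid approaches 1 only at the singly exponential rate exp(-x).

```python
import math

def sigmoid(x: float) -> float:
    """Standard sigmoid gate: its gap to saturation, 1 - sigmoid(x),
    decays only like exp(-x) for large x."""
    return 1.0 / (1.0 + math.exp(-x))

def fast_gate(x: float) -> float:
    """Illustrative fast-saturating gate built by function composition
    (sigmoid of sinh; an assumed example, not the paper's exact choice).

    Since sinh(x) ~ exp(x)/2 for large x, the gap to saturation,
    1 - fast_gate(x), decays doubly exponentially, ~ exp(-exp(x)/2).
    """
    return sigmoid(math.sinh(x))

# Compare the distance from full saturation (value 1) at the same input:
x = 4.0
gap_sigmoid = 1.0 - sigmoid(x)    # ~ exp(-4), on the order of 1e-2
gap_fast = 1.0 - fast_gate(x)     # ~ exp(-exp(4)/2), on the order of 1e-12
print(gap_sigmoid, gap_fast)
```

The much smaller saturation gap of the composed gate at the same input is what, per the paper's analysis, lets training reach extreme gate values (and hence extremely long time scales) with far smaller weight magnitudes.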

Published

2023-06-26

How to Cite

Ohno, K., Kanai, S., & Ida, Y. (2023). Fast Saturating Gate for Learning Long Time Scales with Recurrent Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9319-9326. https://doi.org/10.1609/aaai.v37i8.26117

Section

AAAI Technical Track on Machine Learning III