Coordinate Descent on the Orthogonal Group for Recurrent Neural Network Training

Authors

  • Estelle Massart (University of Oxford, UK; National Physical Laboratory, UK)
  • Vinayak Abrol (Indraprastha Institute of Information Technology Delhi)

DOI:

https://doi.org/10.1609/aaai.v36i7.20742

Keywords:

Machine Learning (ML)

Abstract

We address the poor scalability of learning algorithms for orthogonal recurrent neural networks via the use of stochastic coordinate descent on the orthogonal group, leading to a cost per iteration that increases linearly with the number of recurrent states. This contrasts with the cubic dependency of typical feasible algorithms such as stochastic Riemannian gradient descent, which prohibits the use of large network architectures. Coordinate descent successively rotates two columns of the recurrent matrix. When the coordinate (i.e., the indices of the rotated columns) is selected uniformly at random at each iteration, we prove convergence of the algorithm under standard assumptions on the loss function, stepsize, and minibatch noise. In addition, we show numerically that the Riemannian gradient has an approximately sparse structure. Leveraging this observation, we propose a variant of the algorithm that relies on the Gauss-Southwell coordinate selection rule. Experiments on a benchmark recurrent neural network training problem show that the proposed approach is a very promising step towards training orthogonal recurrent neural networks with large architectures.
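To make the abstract's update concrete, the NumPy sketch below illustrates one iteration of the kind it describes: a Givens rotation applied to two columns of an orthogonal recurrent matrix, with the coordinate chosen either uniformly at random or by a Gauss-Southwell rule. This is a minimal illustration under assumptions of our own, not the authors' implementation; the function and variable names (coordinate_descent_step, pick_coordinate, grad_W) are hypothetical, and the sign conventions and gradient projection may differ from the paper's exact formulation.

import numpy as np

def riemannian_coordinate(W, grad_W, i, j):
    # (i, j) entry of the skew-symmetric matrix W^T G - G^T W, i.e. the
    # component of the Riemannian gradient along the Givens direction that
    # mixes columns i and j (one common convention; an assumption here).
    return W[:, i] @ grad_W[:, j] - W[:, j] @ grad_W[:, i]

def coordinate_descent_step(W, grad_W, i, j, lr):
    # Rotate columns i and j of W by a Givens rotation whose angle is a
    # gradient step on that single coordinate; W stays exactly orthogonal
    # and touching only two columns keeps the update cost linear in the
    # number of recurrent states, rather than cubic.
    theta = -lr * riemannian_coordinate(W, grad_W, i, j)
    c, s = np.cos(theta), np.sin(theta)
    col_i, col_j = W[:, i].copy(), W[:, j].copy()
    W[:, i] = c * col_i - s * col_j
    W[:, j] = s * col_i + c * col_j
    return W

def pick_coordinate(W, grad_W, rule="uniform", rng=None):
    # Uniform rule: sample a column pair at random (the setting covered by
    # the convergence result). Gauss-Southwell rule: pick the pair with the
    # largest Riemannian-gradient magnitude; this naive version forms the
    # full n x n skew matrix, whereas the paper exploits its approximate
    # sparsity to avoid that cost.
    n = W.shape[1]
    rng = rng or np.random.default_rng()
    if rule == "uniform":
        i, j = rng.choice(n, size=2, replace=False)
        return int(i), int(j)
    S = W.T @ grad_W
    S = S - S.T
    i, j = np.unravel_index(np.argmax(np.abs(np.triu(S, 1))), S.shape)
    return int(i), int(j)

# Toy usage: one coordinate-descent iteration on a random orthogonal matrix
# with a placeholder standing in for a minibatch gradient.
n = 8
W = np.linalg.qr(np.random.randn(n, n))[0]
grad_W = np.random.randn(n, n)
i, j = pick_coordinate(W, grad_W, rule="gauss-southwell")
W = coordinate_descent_step(W, grad_W, i, j, lr=1e-2)
assert np.allclose(W.T @ W, np.eye(n), atol=1e-10)  # orthogonality preserved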

Published

2022-06-28

How to Cite

Massart, E., & Abrol, V. (2022). Coordinate Descent on the Orthogonal Group for Recurrent Neural Network Training. Proceedings of the AAAI Conference on Artificial Intelligence, 36(7), 7744-7751. https://doi.org/10.1609/aaai.v36i7.20742

Section

AAAI Technical Track on Machine Learning II