AdaLoss: A Computationally-Efficient and Provably Convergent Adaptive Gradient Method

Authors

  • Xiaoxia Wu, Microsoft
  • Yuege Xie, The University of Texas at Austin
  • Simon Shaolei Du, University of Washington
  • Rachel Ward, The University of Texas at Austin

DOI:

https://doi.org/10.1609/aaai.v36i8.20848

Keywords:

Machine Learning (ML)

Abstract

We propose a computationally-friendly adaptive learning rate schedule, "AdaLoss", which directly uses the value of the loss function to adjust the step size in gradient descent methods. We prove that this schedule enjoys linear convergence for linear regression. Moreover, we extend the analysis to the non-convex regime, in the context of two-layer over-parameterized neural networks: if the network width is sufficiently large (polynomial in the problem parameters), then AdaLoss converges robustly to the global minimum in polynomial time. We numerically verify the theoretical results and extend the scope of the numerical experiments by considering applications in LSTM models for text classification and policy gradients for control problems.
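For intuition, below is a minimal sketch (not the paper's exact algorithm) of a loss-driven step size in the spirit of AdaLoss, applied to gradient descent on least-squares linear regression. Where AdaGrad-Norm-style methods accumulate squared gradient norms, the idea here is to grow the accumulator with the observed loss values, which are available at no extra cost during training; the specific accumulator update and the hyperparameter names (b0, eta, alpha) are illustrative assumptions.

```python
import numpy as np

def adaloss_gd(A, y, steps=500, b0=1.0, eta=1.0, alpha=1.0):
    """Gradient descent on 0.5*||A w - y||^2 with a loss-driven step size.

    The effective step size eta / sqrt(b_sq) shrinks as long as the
    observed loss stays large, so an overly aggressive initial step
    self-corrects after a few iterations.
    """
    w = np.zeros(A.shape[1])
    b_sq = b0 ** 2
    for _ in range(steps):
        residual = A @ w - y
        loss = 0.5 * np.sum(residual ** 2)
        b_sq += alpha * loss          # accumulate the loss, not gradient norms
        grad = A.T @ residual
        w -= (eta / np.sqrt(b_sq)) * grad
    return w

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 10))
w_star = rng.normal(size=10)
y = A @ w_star                         # noiseless targets: global minimum at w_star
w_hat = adaloss_gd(A, y)
print(np.linalg.norm(w_hat - w_star))  # distance to the true weights; should be small
```

The robustness claim in the abstract corresponds, in this sketch, to the fact that the same default hyperparameters work across a range of problem conditionings: if the initial step is too large, the loss spikes, the accumulator grows, and the step size quickly drops into the stable regime.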


Published

2022-06-28

How to Cite

Wu, X., Xie, Y., Du, S. S., & Ward, R. (2022). AdaLoss: A Computationally-Efficient and Provably Convergent Adaptive Gradient Method. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 8691-8699. https://doi.org/10.1609/aaai.v36i8.20848

Issue

Vol. 36 No. 8 (2022)

Section

AAAI Technical Track on Machine Learning III