Better Parameter-Free Stochastic Optimization with ODE Updates for Coin-Betting

Authors

  • Keyi Chen, Boston University
  • John Langford, Microsoft Research
  • Francesco Orabona, Boston University

DOI:

https://doi.org/10.1609/aaai.v36i6.20573

Keywords:

Machine Learning (ML)

Abstract

Parameter-free stochastic gradient descent (PFSGD) algorithms do not require setting learning rates while achieving optimal theoretical performance. In practical applications, however, there remains an empirical gap between tuned stochastic gradient descent (SGD) and PFSGD. In this paper, we close the empirical gap with a new parameter-free algorithm based on continuous-time Coin-Betting on truncated models. The new update is derived as the closed-form solution of an Ordinary Differential Equation (ODE). We show empirically that this new parameter-free algorithm outperforms algorithms with the "best default" learning rates and almost matches the performance of finely tuned baselines without anything to tune.
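For context on the coin-betting construction the abstract refers to, below is a minimal sketch of a generic per-coordinate coin-betting optimizer with a KT-style betting fraction, in the spirit of earlier parameter-free methods. It is not the paper's ODE-based update on truncated models; the class name CoinBettingSGD, the initial wealth, and the per-coordinate gradient bound are assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical minimal per-coordinate coin-betting optimizer (KT-style bettor).
# This illustrates the general coin-betting idea, NOT the ODE-based update
# proposed in the paper. Assumes per-coordinate gradients bounded in [-1, 1].
class CoinBettingSGD:
    def __init__(self, dim, initial_wealth=1.0):
        self.x = np.zeros(dim)                      # current iterate (the "bet")
        self.grad_sum = np.zeros(dim)               # running sum of negative gradients ("coin outcomes")
        self.wealth = np.full(dim, initial_wealth)  # per-coordinate wealth
        self.t = 0                                  # round counter

    def step(self, grad):
        # Reward of the bet just played: wealth grows when -grad and x agree in sign.
        self.wealth += -grad * self.x
        # KT-style betting fraction from the running sum of coin outcomes.
        self.grad_sum += -grad
        self.t += 1
        beta = self.grad_sum / (self.t + 1)
        # Next iterate: bet a fraction of the current wealth on each coordinate.
        self.x = beta * self.wealth
        return self.x

# Usage sketch on a hypothetical objective f(x) = ||x - 1||^2 / 2.
opt = CoinBettingSGD(dim=3)
for _ in range(200):
    grad = np.clip(opt.x - 1.0, -1.0, 1.0)  # clip to keep the "coins" bounded
    x = opt.step(grad)
print(x)  # approaches the minimizer [1, 1, 1] with no learning rate to tune
```

Note that no learning rate appears anywhere: the step size is implicitly controlled by the wealth and the betting fraction, which is the property the paper's ODE-based update refines.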

Published

2022-06-28

How to Cite

Chen, K., Langford, J., & Orabona, F. (2022). Better Parameter-Free Stochastic Optimization with ODE Updates for Coin-Betting. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6239-6247. https://doi.org/10.1609/aaai.v36i6.20573

Section

AAAI Technical Track on Machine Learning I