On Online Optimization: Dynamic Regret Analysis of Strongly Convex and Smooth Problems

Authors

  • Ting-Jui Chang, Texas A&M University
  • Shahin Shahrampour, Texas A&M University

DOI:

https://doi.org/10.1609/aaai.v35i8.16858

Keywords:

Online Learning & Bandits, Optimization

Abstract

The regret bound of dynamic online learning algorithms is often expressed in terms of the variation in the function sequence (V_T) and/or the path-length of the minimizer sequence after T rounds. For strongly convex and smooth functions, Zhang et al. (2017) establish the squared path-length of the minimizer sequence (C*_{2,T}) as a lower bound on regret. They also show that online gradient descent (OGD) achieves this lower bound using multiple gradient queries per round. In this paper, we focus on unconstrained online optimization. We first show that a preconditioned variant of OGD achieves O(min{C*_T, C*_{2,T}}) with one gradient query per round, where C*_T denotes the standard path-length. We then propose the online optimistic Newton (OON) method for the case where first- and second-order information of the function sequence is predictable. The regret bound of OON is captured via the quartic path-length of the minimizer sequence (C*_{4,T}), which can be much smaller than C*_{2,T}. Finally, we show that by using multiple gradient queries per round, OGD achieves an upper bound of O(min{C*_{2,T}, V_T}) on regret.
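For reference, the comparator quantities named in the abstract are standard in the dynamic regret literature; the sketch below restates their usual definitions, where x_t^* denotes the minimizer of f_t. The exact norms and constants used in the paper are assumed to follow this common convention.

\[
\mathrm{Reg}_T^{d} \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(x_t^*),
\qquad x_t^* \in \arg\min_{x} f_t(x),
\]
\[
C_T^* \;=\; \sum_{t=2}^{T} \lVert x_t^* - x_{t-1}^* \rVert, \qquad
C_{2,T}^* \;=\; \sum_{t=2}^{T} \lVert x_t^* - x_{t-1}^* \rVert^2, \qquad
C_{4,T}^* \;=\; \sum_{t=2}^{T} \lVert x_t^* - x_{t-1}^* \rVert^4,
\]
\[
V_T \;=\; \sum_{t=2}^{T} \sup_{x} \bigl| f_t(x) - f_{t-1}(x) \bigr|.
\]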

Published

2021-05-18

How to Cite

Chang, T.-J., & Shahrampour, S. (2021). On Online Optimization: Dynamic Regret Analysis of Strongly Convex and Smooth Problems. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 6966-6973. https://doi.org/10.1609/aaai.v35i8.16858

Issue

Vol. 35 No. 8

Section

AAAI Technical Track on Machine Learning I