Gradient-Variation Bound for Online Convex Optimization with Constraints

Authors

  • Shuang Qiu Booth School of Business, the University of Chicago
  • Xiaohan Wei Meta Platforms, Inc.
  • Mladen Kolar Booth School of Business, the University of Chicago

DOI:

https://doi.org/10.1609/aaai.v37i8.26141

Keywords:

ML: Optimization, ML: Online Learning & Bandits

Abstract

We study online convex optimization with constraints consisting of multiple functional constraints and a relatively simple constraint set, such as a Euclidean ball. Since enforcing the constraints at each time step through projections is computationally challenging in general, we allow decisions to violate the functional constraints but aim to achieve low regret and low cumulative constraint violation over a horizon of T time steps. First-order methods achieve an O(sqrt{T}) regret and an O(1) constraint violation, which is the best-known bound under Slater's condition, but do not take into account the structural information of the problem. Furthermore, the existing algorithms and analysis are limited to Euclidean space. In this paper, we provide an instance-dependent bound for online convex optimization with complex constraints obtained by a novel online primal-dual mirror-prox algorithm. Our instance-dependent regret is quantified by the total gradient variation V_*(T) in the sequence of loss functions. The proposed algorithm works in general normed spaces and simultaneously achieves an O(sqrt{V_*(T)}) regret and an O(1) constraint violation, which is never worse than the best-known (O(sqrt{T}), O(1)) result and improves on previous works that applied mirror-prox-type algorithms to this problem, which achieved O(T^{2/3}) regret and constraint violation. Finally, our algorithm is computationally efficient, as it only performs mirror descent steps in each iteration instead of solving a general Lagrangian minimization problem.
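To illustrate the flavor of the primal-dual updates the abstract refers to, the sketch below shows a generic online primal-dual step for a single functional constraint g(x) <= 0 with a Euclidean mirror map, so the mirror-descent step reduces to a projected gradient step onto a ball. This is an assumption-laden illustration, not the authors' algorithm: the paper's method uses mirror-prox-style updates in general normed spaces, and the step sizes eta and sigma here are hypothetical tuning parameters.

```python
import numpy as np

def project_ball(x, radius):
    """Project x onto the Euclidean ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def primal_dual_step(x, lam, grad_f, g_val, grad_g, eta, sigma, radius):
    """One generic primal-dual step (illustrative only, not the paper's method).

    Descend on the Lagrangian f(x) + lam * g(x) in the primal variable x,
    then ascend in the dual variable lam using the observed constraint value.
    """
    # Primal mirror-descent step; with a Euclidean mirror map this is
    # simply a projected gradient step onto the ball of the given radius.
    x_next = project_ball(x - eta * (grad_f + lam * grad_g), radius)
    # Dual ascent on the constraint value, kept nonnegative.
    lam_next = max(0.0, lam + sigma * g_val)
    return x_next, lam_next
```

In an online loop, grad_f would be the gradient of the loss revealed at the current round, while g_val and grad_g come from the fixed functional constraint; the dual variable lam accumulates pressure whenever the constraint is violated, which is what drives the cumulative violation to remain bounded in analyses of this kind.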

Published

2023-06-26

How to Cite

Qiu, S., Wei, X., & Kolar, M. (2023). Gradient-Variation Bound for Online Convex Optimization with Constraints. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9534-9542. https://doi.org/10.1609/aaai.v37i8.26141

Section

AAAI Technical Track on Machine Learning III