Adaptive and Universal Algorithms for Variational Inequalities with Optimal Convergence

Authors

  • Alina Ene, Boston University
  • Huy Lê Nguyễn, Northeastern University

DOI:

https://doi.org/10.1609/aaai.v36i6.20609

Keywords:

Machine Learning (ML)

Abstract

We develop new adaptive algorithms for variational inequalities with monotone operators, which capture many problems of interest, notably convex optimization and convex-concave saddle point problems. Our algorithms automatically adapt to unknown problem parameters such as the smoothness and the norm of the operator, and the variance of the stochastic evaluation oracle. We show that our algorithms are universal and simultaneously achieve the optimal convergence rates in the non-smooth, smooth, and stochastic settings. The convergence guarantees of our algorithms improve over existing adaptive methods and match those of the optimal non-adaptive algorithms. Additionally, prior works require the optimization domain to be bounded; we remove this restriction and give adaptive, universal algorithms for unbounded domains. Our general proof techniques apply to many variants of the algorithm that use one or two operator evaluations per iteration. Classical methods based on the ExtraGradient/MirrorProx algorithm require two operator evaluations per iteration, which is the dominant factor in the running time in many settings.
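To make the setting concrete, the sketch below shows a generic ExtraGradient loop with an AdaGrad-style adaptive step size for a monotone operator F, applied to a bilinear saddle point problem. This is a minimal illustration of the two-evaluation template the abstract refers to, not the paper's algorithm; the function name `adaptive_extragradient`, the scale parameter `D`, and the specific step-size rule are assumptions made for this example.

```python
import numpy as np

def adaptive_extragradient(F, z0, iters=2000, D=1.0, eps=1e-8):
    """Illustrative adaptive ExtraGradient loop (a sketch, not the paper's method).

    F:  monotone operator, F(z) -> array of the same shape as z
    z0: starting point
    D:  diameter-like scale parameter (assumed known for this sketch)
    """
    z = z0.copy()
    acc = 0.0                                  # accumulated squared norms, AdaGrad-style
    avg, weight = np.zeros_like(z0), 0.0
    for _ in range(iters):
        g = F(z)                               # first operator evaluation
        eta = D / np.sqrt(acc + g @ g + eps)   # step size adapts to observed operator norms
        z_half = z - eta * g                   # extrapolation step
        g_half = F(z_half)                     # second operator evaluation
        z = z - eta * g_half                   # update step
        acc += (g_half - g) @ (g_half - g)     # adapt to local smoothness via operator differences
        avg += eta * z_half                    # weighted ergodic average, standard for VI guarantees
        weight += eta
    return avg / weight

# Example: bilinear saddle point min_x max_y x^T A y, whose optimality
# conditions form a monotone variational inequality with F(x, y) = (A y, -A^T x).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

def F(z):
    x, y = z[:3], z[3:]
    return np.concatenate([A @ y, -A.T @ x])

sol = adaptive_extragradient(F, z0=rng.standard_normal(6))
print(np.linalg.norm(sol))  # should be small: the unique solution is z = 0
```

The two calls to F per iteration are the cost the abstract highlights; one-call variants replace `g_half` with a stored evaluation from the previous iteration.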

Published

2022-06-28

How to Cite

Ene, A., & Nguyễn, H. L. (2022). Adaptive and Universal Algorithms for Variational Inequalities with Optimal Convergence. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6559-6567. https://doi.org/10.1609/aaai.v36i6.20609

Section

AAAI Technical Track on Machine Learning I