Jointly Improving the Sample and Communication Complexities in Decentralized Stochastic Minimax Optimization

Authors

  • Xuan Zhang, Pennsylvania State University
  • Gabriel Mancino-Ball, Rensselaer Polytechnic Institute
  • Necdet Serhat Aybat, Pennsylvania State University
  • Yangyang Xu, Rensselaer Polytechnic Institute

DOI:

https://doi.org/10.1609/aaai.v38i18.30076

Keywords:

SO: Non-convex Optimization, ML: Distributed Machine Learning & Federated Learning, ML: Optimization, SO: Distributed Search

Abstract

We propose a novel single-loop decentralized algorithm, DGDA-VR, for solving stochastic nonconvex strongly-concave minimax problems over a connected network of agents, each equipped with a stochastic first-order oracle to estimate its local gradient. By incorporating variance reduction, DGDA-VR achieves O(ε^−3) oracle complexity and O(ε^−2) communication complexity without resorting to multiple communication rounds per iteration; both complexities are optimal, matching the known lower bounds for this class of problems. Since DGDA-VR does not require multiple communication rounds, it is applicable to a broader range of decentralized computational environments. To the best of our knowledge, this is the first distributed method that uses a single communication round per iteration to jointly optimize the oracle and communication complexities for the problem considered here.
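For readers unfamiliar with the setting, the sketch below illustrates the general structure the abstract describes: each agent performs a gradient descent step on x and an ascent step on y using variance-reduced stochastic gradient estimators, followed by a single gossip (mixing) round with its neighbors. This is an illustrative toy, not the authors' DGDA-VR algorithm; the quadratic local objective, the ring mixing matrix, the STORM-style estimator, and all names and parameter values (e.g., `stoch_grad`, `eta_x`, `beta`) are assumptions made for the example.

```python
# Illustrative sketch (NOT the authors' DGDA-VR): a generic decentralized
# gradient descent-ascent iteration with a STORM-style variance-reduced
# estimator and one gossip round per iteration, on a toy problem that is
# strongly concave in y.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 3
eta_x, eta_y, beta = 0.05, 0.05, 0.1          # step sizes and VR momentum (assumed values)

# Doubly stochastic mixing matrix for a ring graph (single communication round).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

A = rng.standard_normal((n_agents, dim, dim))  # toy agent-local data

def stoch_grad(i, x, y, noise):
    """Noisy first-order oracle for f_i(x, y) = x^T A_i y - 0.5*||y||^2 (toy example)."""
    gx = A[i] @ y + noise                      # gradient w.r.t. x
    gy = A[i].T @ x - y + noise                # gradient w.r.t. y (strongly concave in y)
    return gx, gy

x = rng.standard_normal((n_agents, dim))
y = rng.standard_normal((n_agents, dim))
vx = np.zeros((n_agents, dim))
vy = np.zeros((n_agents, dim))
for i in range(n_agents):
    vx[i], vy[i] = stoch_grad(i, x[i], y[i], 0.01 * rng.standard_normal(dim))

for t in range(200):
    x_new = W @ (x - eta_x * vx)               # descent step on x, then one gossip round
    y_new = W @ (y + eta_y * vy)               # ascent step on y, same gossip round
    for i in range(n_agents):
        xi = 0.01 * rng.standard_normal(dim)   # same sample used at old and new points
        gx_new, gy_new = stoch_grad(i, x_new[i], y_new[i], xi)
        gx_old, gy_old = stoch_grad(i, x[i], y[i], xi)
        # STORM-style recursive variance-reduced gradient estimators.
        vx[i] = gx_new + (1 - beta) * (vx[i] - gx_old)
        vy[i] = gy_new + (1 - beta) * (vy[i] - gy_old)
    x, y = x_new, y_new

print("consensus error in x:", np.linalg.norm(x - x.mean(axis=0)))
```

The key structural point the sketch tries to convey is that communication happens exactly once per iteration (the multiplication by `W`), while variance reduction is handled purely locally by each agent's recursive estimator.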

Published

2024-03-24

How to Cite

Zhang, X., Mancino-Ball, G., Aybat, N. S., & Xu, Y. (2024). Jointly Improving the Sample and Communication Complexities in Decentralized Stochastic Minimax Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 38(18), 20865-20873. https://doi.org/10.1609/aaai.v38i18.30076

Section

AAAI Technical Track on Search and Optimization