Decentralized Gradient-Free Methods for Stochastic Non-smooth Non-convex Optimization
DOI:
https://doi.org/10.1609/aaai.v38i16.29697
Keywords:
MAS: Multiagent Learning, MAS: Agent Communication, MAS: Agent/AI Theories and Architectures, MAS: Applications, MAS: Multiagent Planning, MAS: Multiagent Systems under Uncertainty, MAS: Other Foundations of Multi Agent Systems, MAS: Teamwork, SO: Non-convex Optimization
Abstract
We consider decentralized gradient-free optimization for minimizing Lipschitz continuous functions that satisfy neither smoothness nor convexity assumptions. We propose two novel gradient-free algorithms, the Decentralized Gradient-Free Method (DGFM) and its variant, the Decentralized Gradient-Free Method+ (DGFM+). Based on the techniques of randomized smoothing and gradient tracking, DGFM requires only a zeroth-order oracle evaluation of a single sample in each iteration, making it less demanding in terms of computational resources for individual computing nodes. Theoretically, DGFM achieves a complexity of O(d^(3/2)δ^(-1)ε^(-4)) for obtaining a (δ,ε)-Goldstein stationary point. DGFM+, an advanced version of DGFM, incorporates variance reduction to further improve the convergence behavior. It samples a mini-batch at each iteration and periodically draws a larger batch of data, which improves the complexity to O(d^(3/2)δ^(-1)ε^(-3)). Moreover, experimental results underscore the empirical advantages of our proposed algorithms when applied to real-world datasets.
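The zeroth-order building block described in the abstract can be illustrated with a short sketch. Below is a minimal, hypothetical implementation of a two-point gradient estimator based on randomized smoothing: it perturbs the point along a random unit direction and uses only function values (zeroth-order oracle calls), so the objective may be non-smooth and non-convex. The function name and the specific estimator form are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def zeroth_order_grad(f, x, delta, rng):
    """Two-point zeroth-order estimator of the gradient of the
    randomized-smoothing surrogate f_delta(x) = E_u[f(x + delta*u)].
    Illustrative sketch; not the paper's exact update rule.
    Only function values of f are required."""
    d = x.size
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)  # uniform random direction on the unit sphere
    # Finite-difference along w, scaled by dimension d
    return (d / (2.0 * delta)) * (f(x + delta * w) - f(x - delta * w)) * w
```

Averaged over many random directions, this estimator approximates the gradient of the smoothed surrogate; in a decentralized setting, each node would combine such single-sample estimates with a gradient-tracking consensus step.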
Published
2024-03-24
How to Cite
Lin, Z., Xia, J., Deng, Q., & Luo, L. (2024). Decentralized Gradient-Free Methods for Stochastic Non-smooth Non-convex Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 17477-17486. https://doi.org/10.1609/aaai.v38i16.29697
Section
AAAI Technical Track on Multiagent Systems