Distributed Projection-Free Online Learning for Smooth and Convex Losses

Authors

  • Yibo Wang — Nanjing University; Peng Cheng Laboratory
  • Yuanyu Wan — Zhejiang University
  • Shimao Zhang — Nanjing University
  • Lijun Zhang — Nanjing University; Peng Cheng Laboratory

DOI:

https://doi.org/10.1609/aaai.v37i8.26218

Keywords:

ML: Online Learning & Bandits, ML: Optimization, ML: Time-Series/Data Streams

Abstract

We investigate the problem of distributed online convex optimization with complicated constraints, in which the projection operation could be the computational bottleneck. To avoid projections, distributed online projection-free methods have been proposed and attain an O(T^{3/4}) regret bound for general convex losses. However, they cannot utilize the smoothness condition, which has been exploited in the centralized setting to improve the regret. In this paper, we propose a new distributed online projection-free method with a tighter regret bound of O(T^{2/3}) for smooth and convex losses. Specifically, we first provide a distributed extension of Follow-the-Perturbed-Leader so that the smoothness can be utilized in the distributed setting. Then, we reduce the computational cost via sampling and blocking techniques. In this way, our method only needs to solve one linear optimization per round on average. Finally, we conduct experiments on benchmark datasets to verify the effectiveness of our proposed method.
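The core primitive that makes such methods projection-free is the linear optimization oracle: instead of projecting onto the constraint set, Follow-the-Perturbed-Leader picks each round's point by minimizing a perturbed cumulative gradient linearly over the set. The sketch below is a minimal single-learner illustration of that idea over the probability simplex (all names are illustrative; the paper's distributed variant with sampling and blocking is substantially more involved):

```python
import numpy as np

def linear_oracle_simplex(c):
    """Linear minimization oracle over the probability simplex:
    argmin_{x in simplex} <c, x> is the vertex e_i with i = argmin_i c_i."""
    x = np.zeros_like(c)
    x[np.argmin(c)] = 1.0
    return x

def ftpl_projection_free(grads_stream, d, eta=0.1, seed=None):
    """Single-learner Follow-the-Perturbed-Leader loop that uses only
    linear optimization calls (no projections).

    grads_stream: iterable of per-round gradient vectors of length d.
    Returns the list of points played, one per round."""
    rng = np.random.default_rng(seed)
    cum_grad = np.zeros(d)
    plays = []
    for g in grads_stream:
        # Fresh random perturbation each round smooths the leader choice.
        perturbation = rng.exponential(scale=1.0 / eta, size=d)
        # One linear optimization per round, as in the projection-free setting.
        x = linear_oracle_simplex(cum_grad - perturbation)
        plays.append(x)
        cum_grad += g  # accumulate observed gradients for the next round
    return plays
```

For polytopes such as the simplex, the oracle reduces to scanning the vertices, which is why a single linear optimization per round is far cheaper than a projection onto a complicated set.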

Published

2023-06-26

How to Cite

Wang, Y., Wan, Y., Zhang, S., & Zhang, L. (2023). Distributed Projection-Free Online Learning for Smooth and Convex Losses. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 10226-10234. https://doi.org/10.1609/aaai.v37i8.26218

Section

AAAI Technical Track on Machine Learning III