Online Submodular Maximization via Online Convex Optimization

Authors

  • Tareq Si Salem, Northeastern University
  • Gözde Özcan, Northeastern University
  • Iasonas Nikolaou, Boston University
  • Evimaria Terzi, Boston University
  • Stratis Ioannidis, Northeastern University

DOI:

https://doi.org/10.1609/aaai.v38i13.29425

Keywords:

ML: Online Learning & Bandits, ML: Optimization, SO: Combinatorial Optimization, SO: Non-convex Optimization

Abstract

We study monotone submodular maximization under general matroid constraints in the online setting. We prove that online optimization of a large class of submodular functions, namely, threshold potential functions, reduces to online convex optimization (OCO). This is precisely because functions in this class admit a concave relaxation; as a result, OCO policies, coupled with an appropriate rounding scheme, can be used to achieve sublinear regret in the combinatorial setting. We also show that our reduction extends to many different versions of the online learning problem, including the dynamic regret, bandit, and optimistic-learning settings.
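To make the reduction concrete, here is a minimal illustrative sketch (not the paper's implementation) of the continuous side of the pipeline: online projected gradient ascent on the concave relaxation of a threshold potential, min(b, w·x), over the box [0,1]^n (a trivial matroid relaxation). The weights `w`, budget `b`, step size `eta`, and horizon `T` are all assumed for illustration, and the rounding step back to a discrete set is omitted.

```python
import numpy as np

def threshold_potential(x, w, b):
    # Concave relaxation of a threshold potential: min(b, w . x).
    return min(b, float(w @ x))

def subgradient(x, w, b):
    # A subgradient of min(b, w . x): w below the threshold, 0 above it.
    return w if float(w @ x) < b else np.zeros_like(w)

# Online projected gradient ascent over the box [0, 1]^n.
rng = np.random.default_rng(0)
n, T, b, eta = 5, 200, 1.0, 0.1
x = np.full(n, 0.5)      # fractional decision, committed before w_t is revealed
total_reward = 0.0
for t in range(T):
    w = rng.random(n)    # adversary reveals this round's weights
    total_reward += threshold_potential(x, w, b)
    # Ascent step, then projection back onto [0, 1]^n.
    x = np.clip(x + eta * subgradient(x, w, b), 0.0, 1.0)
```

In the full framework, a general matroid constraint replaces the box (with projection onto the matroid's base polytope), and a rounding scheme converts the fractional iterate into a feasible set each round while preserving the OCO regret guarantee in expectation.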

Published

2024-03-24

How to Cite

Si Salem, T., Özcan, G., Nikolaou, I., Terzi, E., & Ioannidis, S. (2024). Online Submodular Maximization via Online Convex Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 15038-15046. https://doi.org/10.1609/aaai.v38i13.29425

Section

AAAI Technical Track on Machine Learning IV