A One-Size-Fits-All Solution to Conservative Bandit Problems

Authors

  • Yihan Du Tsinghua University
  • Siwei Wang Tsinghua University
  • Longbo Huang Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v35i8.16891

Keywords:

Online Learning & Bandits

Abstract

In this paper, we study a family of conservative bandit problems (CBPs) with sample-path reward constraints, i.e., the learner's reward performance must be at least as good as a given baseline at any time. We propose a One-Size-Fits-All solution to CBPs and present its applications to three encompassed problems, i.e., conservative multi-armed bandits (CMAB), conservative linear bandits (CLB) and conservative contextual combinatorial bandits (CCCB). Different from previous works, which consider high-probability constraints on the expected reward, we focus on a sample-path constraint on the actually received reward, and achieve better theoretical guarantees (T-independent additive regrets instead of T-dependent) and empirical performance. Furthermore, we extend the results and consider a novel conservative mean-variance bandit problem (MV-CBP), which measures learning performance with both the expected reward and its variability. For this extended problem, we provide a novel algorithm with O(1/T) normalized additive regret (T-independent in the cumulative form) and validate this result through empirical evaluation.
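As a concrete reading of the sample-path constraint (a sketch of our interpretation; the symbols α, r_s, and r_{0,s} are assumptions, since the abstract does not fix notation):

\[
\sum_{s=1}^{t} r_s \;\ge\; (1-\alpha) \sum_{s=1}^{t} r_{0,s} \qquad \text{for every round } t \in \{1, \dots, T\},
\]

where $r_s$ is the reward the learner actually receives at round $s$, $r_{0,s}$ is the reward the baseline strategy receives, and $\alpha \in (0,1)$ controls how far below the baseline the learner may fall. Under this reading, prior formulations instead required the analogous inequality on expected rewards to hold only with high probability, rather than on every sample path.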


Published

2021-05-18

How to Cite

Du, Y., Wang, S., & Huang, L. (2021). A One-Size-Fits-All Solution to Conservative Bandit Problems. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 7254-7261. https://doi.org/10.1609/aaai.v35i8.16891

Issue

Vol. 35 No. 8 (2021)

Section

AAAI Technical Track on Machine Learning I