Safe Linear Stochastic Bandits


  • Kia Khezeli, Cornell University
  • Eilyan Bitar, Cornell University



We introduce the safe linear stochastic bandit framework—a generalization of linear stochastic bandits—where, in each stage, the learner is required to select an arm whose expected reward is no less than a predetermined (safe) threshold with high probability. We assume that the learner initially knows an arm that is safe, but not necessarily optimal. Leveraging this assumption, we introduce a learning algorithm that systematically combines known safe arms with exploratory arms to safely expand the set of known safe arms over time, while facilitating safe greedy exploitation in subsequent stages. In addition to satisfying the safety constraint at every stage of play, the proposed algorithm is shown to exhibit an expected regret of no more than O(√T log(T)) after T stages of play.
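To illustrate the kind of safe arm selection the abstract describes, the sketch below shows one stage of play in a simplified form: an exploratory arm is mixed with a known safe arm via a convex combination, and the mixture is accepted only if a lower confidence bound on its expected reward stays above the safe threshold. This is a hypothetical illustration, not the authors' algorithm; the estimate `theta_hat`, the confidence radius `radius`, and the grid search over mixing weights are all assumptions made for the example.

```python
import math

def dot(a, b):
    """Inner product of two vectors represented as lists."""
    return sum(ai * bi for ai, bi in zip(a, b))

def safe_arm(x_explore, x_safe, theta_hat, radius, threshold):
    """Return the convex combination alpha*x_explore + (1-alpha)*x_safe
    with the largest alpha (on a grid) whose reward lower confidence
    bound remains above the safe threshold.

    `theta_hat` is an estimate of the unknown reward parameter and
    `radius` a confidence radius on the estimation error (both assumed
    available from earlier stages of play).
    """
    best = x_safe  # the known safe arm is always a feasible fallback
    for k in range(0, 101):
        alpha = k / 100
        x = [alpha * xe + (1 - alpha) * xs
             for xe, xs in zip(x_explore, x_safe)]
        # Lower confidence bound on the expected reward x . theta:
        # estimated reward minus radius times the arm's norm.
        lcb = dot(x, theta_hat) - radius * math.sqrt(dot(x, x))
        if lcb >= threshold:
            best = x  # keep the most exploratory mixture that is provably safe
    return best
```

For example, with `theta_hat = [1.0, 0.5]`, known safe arm `[1.0, 0.0]`, exploratory arm `[0.0, 1.0]`, threshold `0.4`, and radius `0.2`, the returned arm leans as far toward the exploratory direction as the confidence bound permits while remaining certifiably safe.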




How to Cite

Khezeli, K., & Bitar, E. (2020). Safe Linear Stochastic Bandits. Proceedings of the AAAI Conference on Artificial Intelligence, 34(06), 10202-10209.



AAAI Technical Track: Reasoning under Uncertainty