Stochastic Contextual Bandits with Long Horizon Rewards

Authors

  • Yuzhen Qin, University of California, Riverside
  • Yingcong Li, University of California, Riverside
  • Fabio Pasqualetti, University of California, Riverside
  • Maryam Fazel, University of Washington
  • Samet Oymak, University of California, Riverside; University of Michigan

DOI:

https://doi.org/10.1609/aaai.v37i8.26140

Keywords:

ML: Online Learning & Bandits, ML: Reinforcement Learning Theory

Abstract

The growing interest in complex decision-making and language modeling problems highlights the importance of sample-efficient learning over very long horizons. This work takes a step in this direction by investigating contextual linear bandits where the current reward depends on at most s prior actions and contexts (not necessarily consecutive), up to a time horizon of h. In order to avoid polynomial dependence on h, we propose new algorithms that leverage sparsity to discover the dependence pattern and arm parameters jointly. We consider both the data-poor (T ≤ h) and data-rich (T ≥ h) regimes, and derive respective regret upper bounds Õ(d√(sT) + min{q, T}) and Õ(√(sdT)), with sparsity s, feature dimension d, total time horizon T, and q that is adaptive to the reward dependence pattern. Complementing the upper bounds, we also show that learning over a single trajectory brings inherent challenges: While the dependence pattern and arm parameters form a rank-1 matrix, circulant matrices are not isometric over rank-1 manifolds and sample complexity indeed benefits from the sparse reward dependence structure. Our results necessitate a new analysis to address long-range temporal dependencies across data and avoid polynomial dependence on the reward horizon h. Specifically, we utilize connections to the restricted isometry property of circulant matrices formed by dependent sub-Gaussian vectors and establish new guarantees that are also of independent interest.
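To make the reward model in the abstract concrete, the following minimal Python sketch simulates rewards that depend on at most s of the last h actions and contexts, with the s-sparse dependence pattern w and the arm parameter θ forming the rank-1 matrix w θᵀ mentioned above. This is an illustrative reading of the setup, not the authors' code; all variable names (theta, w, lags, X) and the noise level are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, s, T = 5, 50, 3, 200          # feature dim, reward horizon, sparsity, rounds

# Unknown arm parameter (normalized) and s-sparse dependence pattern over h lags.
theta = rng.standard_normal(d)
theta /= np.linalg.norm(theta)
lags = rng.choice(h, size=s, replace=False)   # which past steps matter (not necessarily consecutive)
w = np.zeros(h)
w[lags] = rng.standard_normal(s)

# Feature vector of the action/context chosen at each round (placeholder policy).
X = rng.standard_normal((T, d))

def reward(t, X, w, theta, noise=0.1):
    """Reward at time t: a sparse combination of the last h actions' linear scores."""
    r = 0.0
    for i in range(h):
        if t - i >= 0 and w[i] != 0.0:
            r += w[i] * (X[t - i] @ theta)
    return r + noise * rng.standard_normal()

rewards = np.array([reward(t, X, w, theta) for t in range(T)])
print(rewards[:5])
```

The learner observes only the rewards and the chosen features; recovering w and θ jointly from a single trajectory is where the circulant-matrix and restricted-isometry analysis described in the abstract comes in.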

Published

2023-06-26

How to Cite

Qin, Y., Li, Y., Pasqualetti, F., Fazel, M., & Oymak, S. (2023). Stochastic Contextual Bandits with Long Horizon Rewards. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9525-9533. https://doi.org/10.1609/aaai.v37i8.26140

Section

AAAI Technical Track on Machine Learning III