Thompson Sampling for Real-Valued Combinatorial Pure Exploration of Multi-Armed Bandit
DOI:
https://doi.org/10.1609/aaai.v38i13.29355
Keywords:
ML: Online Learning & Bandits, SO: Combinatorial Optimization
Abstract
We study the real-valued combinatorial pure exploration of the multi-armed bandit (R-CPE-MAB) problem. In R-CPE-MAB, a player is given stochastic arms, and the reward of each arm follows an unknown distribution. In each time step, the player pulls a single arm and observes its reward. The player's goal is to identify the optimal action from a finite real-valued action set with as few arm pulls as possible. Previous methods for R-CPE-MAB require enumerating all feasible actions of the combinatorial optimization problem under consideration. Since the size of the action set generally grows exponentially with the number of arms, this is practically infeasible when the number of arms is large. We introduce an algorithm named the Generalized Thompson Sampling Explore (GenTS-Explore) algorithm, which is the first algorithm that works even when the size of the action set is exponentially large in the number of arms. We also introduce a novel problem-dependent sample complexity lower bound for the R-CPE-MAB problem, and show that the GenTS-Explore algorithm achieves the optimal sample complexity up to a problem-dependent constant factor.
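The interaction protocol described in the abstract can be illustrated with a minimal toy simulation. The code below is a sketch, not the GenTS-Explore algorithm: it uses naive round-robin exploration and assumes, for illustration, linear action values (the value of an action vector a is its inner product with the true mean vector), Gaussian reward noise, and invented names such as `simulate_r_cpe_mab`.

```python
import random

def simulate_r_cpe_mab(mu, actions, budget, seed=0):
    """Toy R-CPE-MAB interaction loop (illustrative only, not GenTS-Explore).

    mu      : true mean reward of each arm (unknown to the player)
    actions : finite set of real-valued action vectors; an action's value
              is assumed here to be the inner product <a, mu>
    budget  : total number of arm pulls before the player commits
    """
    rng = random.Random(seed)
    d = len(mu)
    counts = [0] * d
    sums = [0.0] * d
    for t in range(budget):
        i = t % d                        # naive round-robin pull, purely for illustration
        r = mu[i] + rng.gauss(0.0, 1.0)  # pull arm i, observe a noisy reward
        counts[i] += 1
        sums[i] += r
    est = [sums[i] / counts[i] for i in range(d)]
    # Commit to the action maximizing the estimated value <a, est>
    return max(actions, key=lambda a: sum(ai * mi for ai, mi in zip(a, est)))
```

With singleton actions this reduces to best-arm identification; general action sets (e.g. matchings or paths encoded as 0-1 vectors) make exhaustive enumeration, as used here via `max`, the exponential bottleneck the paper's algorithm avoids.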
Published
2024-03-24
How to Cite
Nakamura, S., & Sugiyama, M. (2024). Thompson Sampling for Real-Valued Combinatorial Pure Exploration of Multi-Armed Bandit. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14414-14421. https://doi.org/10.1609/aaai.v38i13.29355
Section
AAAI Technical Track on Machine Learning IV