Combinatorial Pure Exploration with Full-Bandit or Partial Linear Feedback

Authors

  • Yihan Du IIIS, Tsinghua University
  • Yuko Kuroki The University of Tokyo / RIKEN
  • Wei Chen Microsoft

DOI:

https://doi.org/10.1609/aaai.v35i8.16892

Keywords:

Online Learning & Bandits

Abstract

In this paper, we first study the problem of combinatorial pure exploration with full-bandit feedback (CPE-BL), where a learner is given a combinatorial action space X \subseteq \{0,1\}^d, and in each round the learner pulls an action x \in X and receives a random reward with expectation x^T \theta, where \theta \in \mathbb{R}^d is a latent and unknown environment vector. The objective is to identify the optimal action, i.e., the one with the highest expected reward, using as few samples as possible. For CPE-BL, we design the first polynomial-time adaptive algorithm, whose sample complexity matches the lower bound (within a logarithmic factor) for a family of instances and has a light dependence on \Delta_{\min} (the smallest gap between the optimal action and the sub-optimal actions). Furthermore, we propose a novel generalization of CPE-BL with flexible feedback structures, called combinatorial pure exploration with partial linear feedback (CPE-PL), which encompasses several families of sub-problems including full-bandit feedback, semi-bandit feedback, partial feedback and nonlinear reward functions. In CPE-PL, each pull of an action x yields a random feedback vector with expectation M_x \theta, where M_x \in \mathbb{R}^{m_x \times d} is a transformation matrix for x, together with a random (possibly nonlinear) reward related to x. For CPE-PL, we develop the first polynomial-time algorithm that simultaneously addresses limited feedback, general reward functions and combinatorial action spaces (e.g., matroids, matchings and s-t paths), and we provide its sample complexity analysis. Our empirical evaluation demonstrates that our algorithms run orders of magnitude faster than existing ones, that our CPE-BL algorithm is robust across different \Delta_{\min} settings, and that our CPE-PL algorithm is the first to return correct answers for nonlinear reward functions.
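To make the two observation models concrete, below is a minimal Python sketch of the CPE-PL feedback structure described in the abstract. It is an illustration of the problem setup, not the authors' algorithm; the environment vector theta, the noise level sigma, the example actions, and the helper names (transformation_matrix, pull) are all hypothetical choices for this sketch. It instantiates M_x for semi-bandit feedback (one noisy coordinate of theta per selected entry of x), and notes CPE-BL as the special case M_x = x^T with a linear reward.

```python
import numpy as np

# Minimal sketch (not the paper's algorithm) of the CPE-PL observation
# model: pulling an action x yields a noisy feedback vector with
# expectation M_x @ theta, plus a scalar reward related to x.
# All concrete values and names below are illustrative assumptions.

rng = np.random.default_rng(0)

d = 4
theta = rng.normal(size=d)  # latent environment vector, unknown to the learner

# Two example combinatorial actions in {0,1}^d, e.g. indicator vectors
# of edge sets such as matchings or s-t paths.
actions = [np.array([1, 1, 0, 0]), np.array([0, 1, 1, 0])]

def transformation_matrix(x):
    """Semi-bandit feedback as one special case of CPE-PL: M_x keeps the
    rows of the identity where x is 1, so the learner observes a noisy
    copy of each selected coordinate of theta."""
    return np.eye(len(x))[x.astype(bool)]

def pull(x, sigma=0.1):
    """One pull of action x: a noisy feedback vector with mean M_x @ theta
    and a reward. CPE-BL is the special case M_x = x^T, where the single
    observed value is the reward x^T theta itself; in general CPE-PL the
    reward may be a nonlinear function of x and theta."""
    M_x = transformation_matrix(x)
    feedback = M_x @ theta + sigma * rng.normal(size=M_x.shape[0])
    reward = x @ theta  # linear here; CPE-PL also allows nonlinear rewards
    return feedback, reward

for x in actions:
    fb, r = pull(x)
    print(f"action {x}: feedback {fb}, reward {r:.3f}")
```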

Published

2021-05-18

How to Cite

Du, Y., Kuroki, Y., & Chen, W. (2021). Combinatorial Pure Exploration with Full-Bandit or Partial Linear Feedback. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 7262-7270. https://doi.org/10.1609/aaai.v35i8.16892

Section

AAAI Technical Track on Machine Learning I