DART: Adaptive Accept Reject Algorithm for Non-Linear Combinatorial Bandits


  • Mridul Agarwal Purdue University
  • Vaneet Aggarwal Purdue University
  • Abhishek Kumar Umrawal Purdue University
  • Chris Quinn Iowa State University




Keywords: Online Learning & Bandits, Learning Theory, Sequential Decision Making


We consider the bandit problem of selecting K out of N arms at each time step, where the joint reward can be a non-linear function of the rewards of the selected individual arms. Directly applying a multi-armed bandit algorithm would require choosing among all possible combinations, making the action space prohibitively large. To simplify the problem, existing works on combinatorial bandits typically assume the feedback is a linear function of the individual arm rewards. In this paper, we prove a lower bound for top-K subset selection with bandit feedback under possibly correlated rewards. We present a novel algorithm for the combinatorial setting that requires neither individual arm feedback nor linearity of the reward function, and that handles correlated rewards across individual arms. Our algorithm, aDaptive Accept RejecT (DART), sequentially finds good arms and eliminates bad arms based on confidence bounds. DART is computationally efficient and uses storage linear in N. Further, DART achieves a regret bound of Õ(K√KNT) over a time horizon T, which matches the lower bound for bandit feedback up to a factor of √(log 2NT). When applied to cross-selling optimization and to maximizing the mean of individual rewards, DART significantly outperforms state-of-the-art methods in both linear and non-linear joint-reward environments.
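To make the accept/reject idea concrete, below is a minimal, simplified sketch of top-K arm selection via confidence-bound elimination. It is an illustration only, not the paper's algorithm: for simplicity it observes a noisy reward per sampled arm (semi-bandit-style feedback), whereas DART works from the joint subset reward alone. The Hoeffding-style confidence radius, the Gaussian noise level, and the round structure are all assumptions made for this sketch.

```python
import math
import random


def accept_reject_topk(means, K, T, seed=0):
    """Illustrative accept/reject elimination for selecting the top-K of N arms.

    `means` are the true (unknown) arm means; feedback here is a noisy
    per-arm observation -- a simplification, since DART itself uses only
    the joint reward of the played subset.
    """
    rng = random.Random(seed)
    N = len(means)
    active = set(range(N))            # undecided arms
    accepted = set()

    counts = [0] * N
    sums = [0.0] * N

    for t in range(T):
        if len(accepted) == K or not active:
            break
        # Sample every undecided arm once this round.
        for i in active:
            counts[i] += 1
            sums[i] += means[i] + rng.gauss(0, 0.1)   # noisy observation
        # Empirical means and Hoeffding-style confidence radii.
        mu = {i: sums[i] / counts[i] for i in active}
        rad = {i: math.sqrt(2 * math.log(N * T) / counts[i]) for i in active}
        order = sorted(active, key=lambda i: mu[i], reverse=True)
        need = K - len(accepted)       # slots still to fill
        top, rest = order[:need], order[need:]
        # Accept: an arm whose lower bound beats every remaining upper bound.
        for i in top:
            if rest and mu[i] - rad[i] > max(mu[j] + rad[j] for j in rest):
                accepted.add(i)
                active.discard(i)
        # Reject: an arm whose upper bound is below every needed lower bound.
        for j in rest:
            if top and mu[j] + rad[j] < min(mu[i] - rad[i] for i in top):
                active.discard(j)

    # Fill any remaining slots with the best still-undecided arms.
    fill = sorted(active, key=lambda i: sums[i] / counts[i], reverse=True)
    return sorted(accepted | set(fill[: K - len(accepted)]))
```

Run on a toy instance with well-separated means, e.g. `accept_reject_topk([0.9, 0.8, 0.1, 0.05, 0.0], K=2, T=2000)`, the sketch rejects the three weak arms early and returns the top two. The key design point mirrored from DART is that each accept/reject decision is made only once, when the confidence intervals separate, keeping storage linear in N.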




How to Cite

Agarwal, M., Aggarwal, V., Umrawal, A. K., & Quinn, C. (2021). DART: Adaptive Accept Reject Algorithm for Non-Linear Combinatorial Bandits. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 6557-6565. https://doi.org/10.1609/aaai.v35i8.16812



AAAI Technical Track on Machine Learning I