An Efficient Algorithm for Deep Stochastic Contextual Bandits

Authors

  • Tan Zhu, University of Connecticut
  • Guannan Liang, University of Connecticut
  • Chunjiang Zhu, University of Connecticut
  • Haining Li, University of Connecticut
  • Jinbo Bi, University of Connecticut

DOI:

https://doi.org/10.1609/aaai.v35i12.17335

Keywords:

Online Learning & Bandits, (Deep) Neural Network Algorithms, Reinforcement Learning

Abstract

In stochastic contextual bandit (SCB) problems, an agent selects an action based on an observed context to maximize the cumulative reward over iterations. Recent studies have used a deep neural network (DNN) to predict the expected reward of each action, training the DNN with a stochastic gradient based method. However, convergence analysis of these methods, examining whether and where they converge, has been largely neglected. In this work, we formulate the SCB problem with a DNN reward function as a non-convex stochastic optimization problem, and design a stage-wise stochastic gradient descent algorithm to optimize the problem and determine the action policy. We prove that, with high probability, the action sequence chosen by our algorithm converges to a greedy action policy with respect to a locally optimal reward function. Extensive experiments on multiple real-world datasets demonstrate the effectiveness and efficiency of the proposed algorithm.
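
The following is a minimal sketch, not the authors' implementation, of the high-level loop the abstract describes: a DNN reward model trained stage-wise by SGD, with greedy action selection. The network size, stage schedule, squared-error loss, and the env_step interface are all illustrative assumptions.

```python
import torch
import torch.nn as nn

CONTEXT_DIM, N_ACTIONS = 16, 4  # assumed problem dimensions

class RewardNet(nn.Module):
    """DNN predicting the expected reward of each action for a context."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CONTEXT_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

def run_bandit(env_step, n_stages=10, iters_per_stage=100, lr0=0.1):
    """env_step(context, action) -> observed reward (float); hypothetical interface."""
    model = RewardNet()
    for stage in range(n_stages):
        # Stage-wise schedule: restart SGD each stage with a smaller step size
        # (one plausible schedule; the paper's exact schedule may differ).
        opt = torch.optim.SGD(model.parameters(), lr=lr0 / (stage + 1))
        for _ in range(iters_per_stage):
            context = torch.randn(1, CONTEXT_DIM)        # observed context
            with torch.no_grad():
                action = model(context).argmax(dim=1)    # greedy action
            reward = env_step(context, action.item())    # bandit feedback
            # One SGD step on the squared error of the chosen action's prediction.
            pred = model(context)[0, action.item()]
            loss = (pred - reward) ** 2
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

For instance, `run_bandit(lambda c, a: float(c[0, a]))` trains against a toy environment whose reward is the chosen action's context coordinate. Note this sketch follows the purely greedy policy the abstract analyzes and omits any explicit exploration scheme.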

Published

2021-05-18

How to Cite

Zhu, T., Liang, G., Zhu, C., Li, H., & Bi, J. (2021). An Efficient Algorithm for Deep Stochastic Contextual Bandits. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 11193-11201. https://doi.org/10.1609/aaai.v35i12.17335

Section

AAAI Technical Track on Machine Learning V