Imitation Upper Confidence Bound for Bandits on a Graph
DOI: https://doi.org/10.1609/aaai.v32i1.12183
Keywords: Bandits, Bandit Problems, Multiagent Learning, Reinforcement Learning, Statistical Learning
Abstract
We consider a graph of interconnected agents, each implementing a common policy and each playing a bandit problem with identical reward distributions. We restrict the information propagated through the graph so that agents can observe only each other's actions. We propose an extension of the Upper Confidence Bound (UCB) algorithm to this setting and empirically demonstrate that our solution improves performance over UCB according to multiple metrics and across various graph configurations.
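The page does not spell out the imitation mechanism itself, so as context for the setting, here is a minimal sketch of the standard UCB1 baseline that the paper extends: each arm's empirical mean plus an exploration bonus determines the next pull. The `ucb1` function, the Bernoulli reward model, and the exploration constant `c` are illustrative choices, not details taken from the paper.

```python
import math
import random

def ucb1(means, horizon, c=2.0, seed=0):
    """Run UCB1 on Bernoulli arms with the given success probabilities.

    Returns the number of times each arm was pulled. An imitation variant
    (as studied in the paper) would additionally fold in actions observed
    from neighbouring agents on the graph; that part is omitted here.
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k       # pulls per arm
    values = [0.0] * k     # empirical mean reward per arm

    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1    # initialise by pulling each arm once
        else:
            # UCB1 index: empirical mean + sqrt(c * ln t / n_a) bonus
            arm = max(
                range(k),
                key=lambda a: values[a] + math.sqrt(c * math.log(t) / counts[a]),
            )
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
    return counts
```

Over a long enough horizon, the arm with the highest mean reward accumulates the large majority of pulls, which is the behaviour the paper's imitation extension aims to accelerate by sharing observed actions among agents.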
Published: 2018-04-29
How to Cite
Lupu, A., & Precup, D. (2018). Imitation Upper Confidence Bound for Bandits on a Graph. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12183
Section: Student Abstract Track