Decentralized Multi-Agent Linear Bandits with Safety Constraints

Authors

  • Sanae Amani University of California, Los Angeles
  • Christos Thrampoulidis University of British Columbia, Vancouver

Keywords

Online Learning & Bandits, Multiagent Learning, Learning Theory, Reinforcement Learning

Abstract

We study decentralized stochastic linear bandits, where a network of N agents acts cooperatively to efficiently solve a linear bandit-optimization problem over a d-dimensional space. For this problem, we propose DLUCB: a fully decentralized algorithm that minimizes the cumulative regret over the entire network. At each round of the algorithm, each agent chooses its action following an upper confidence bound (UCB) strategy, and agents share information with their immediate neighbors through a carefully designed consensus procedure that repeats over cycles. Our analysis adjusts the duration of these communication cycles, ensuring near-optimal regret performance O(d log(NT) √(NT)) at a communication rate of O(dN^2) per round. The structure of the network affects the regret performance via a small additive term, coined the regret of delay, that depends on the spectral gap of the underlying graph. Notably, our results apply to arbitrary network topologies without requiring a dedicated agent that acts as a server. For settings with high communication cost, we propose RC-DLUCB: a modification of DLUCB with rare communication among agents. The new algorithm trades off regret performance for a significantly reduced total communication cost of O(d^3 N^(5/2)) over all T rounds. Finally, we show that our ideas extend naturally to the emerging, albeit more challenging, setting of safe bandits. For the recently studied problem of linear bandits with unknown linear safety constraints, we propose the first safe decentralized algorithm. Our study contributes towards applying bandit techniques in safety-critical distributed systems that repeatedly deal with unknown stochastic environments. We present numerical simulations for various network topologies that corroborate our theoretical findings.
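The mechanism the abstract describes — each agent playing a linear-UCB action and then mixing its sufficient statistics with its immediate neighbors through a doubly stochastic consensus step — can be sketched as below. This is a heavily simplified illustration, not the authors' DLUCB: it uses a ring network with Metropolis-style weights, a single gossip step per round instead of the paper's tuned communication cycles, and an ad hoc confidence width `beta`; all numerical choices (d, N, the arm set, the noise level) are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, K, T = 3, 4, 10, 300                       # dimension, agents, arms, rounds

theta_star = rng.normal(size=d)                  # unknown reward parameter
theta_star /= np.linalg.norm(theta_star)
arms = rng.normal(size=(K, d))                   # shared finite action set
arms /= np.linalg.norm(arms, axis=1, keepdims=True)

# Doubly stochastic mixing matrix for a ring of N agents (Metropolis-style).
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] += 0.25
    W[i, (i + 1) % N] += 0.25

V = np.stack([np.eye(d)] * N)                    # per-agent regularized Gram matrices
b = np.zeros((N, d))                             # per-agent reward-weighted action sums
beta = 2.0                                       # confidence width (tuning constant, not the paper's)

for t in range(T):
    dV = np.zeros_like(V)
    db = np.zeros_like(b)
    for i in range(N):
        Vinv = np.linalg.inv(V[i])
        theta_hat = Vinv @ b[i]
        # Optimistic (UCB) score for every arm; play the highest-scoring one.
        width = np.sqrt(np.einsum('kd,de,ke->k', arms, Vinv, arms))
        x = arms[np.argmax(arms @ theta_hat + beta * width)]
        r = x @ theta_star + 0.1 * rng.normal()  # noisy linear reward
        dV[i] = np.outer(x, x)
        db[i] = r * x
    # One gossip/consensus step per round: mix neighbors' statistics via W.
    # Increments are scaled by N so the running average tracks the network-wide sum.
    V = np.einsum('ij,jde->ide', W, V + N * dV)
    b = W @ (b + N * db)

# Every agent's local ridge estimate should now be close to theta_star.
errs = [np.linalg.norm(np.linalg.solve(V[i], b[i]) - theta_star) for i in range(N)]
print(round(max(errs), 3))
```

The key design point the sketch mirrors is that no agent ever talks to a server: each row of W touches only a node's immediate ring neighbors, yet repeated mixing spreads every agent's observations across the whole network, with the mixing speed governed by the spectral gap of W.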

Published

2021-05-18

How to Cite

Amani, S., & Thrampoulidis, C. (2021). Decentralized Multi-Agent Linear Bandits with Safety Constraints. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 6627-6635. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16820

Section

AAAI Technical Track on Machine Learning I