Robust Decentralized Multi-armed Bandits: From Corruption-Resilience to Byzantine-Resilience

Authors

  • Zicheng Hu, East China Normal University
  • Yuchen Wang, East China Normal University
  • Cheng Chen, East China Normal University

DOI:

https://doi.org/10.1609/aaai.v40i26.39344

Abstract

The decentralized cooperative multi-agent multi-armed bandit (DeCMA2B) problem considers how multiple agents collaborate in a decentralized multi-armed bandit setting. Though this problem has been extensively studied in previous work, most existing methods remain susceptible to various adversarial attacks. In this paper, we first study DeCMA2B with adversarial corruption, where an adversary can corrupt the reward observations of all agents subject to a limited corruption budget. We propose a robust algorithm, called DeMABAR, which ensures that each agent’s individual regret suffers only an additive term proportional to the corruption budget. We then consider a more realistic scenario where the adversary can only attack a small number of agents. Our theoretical analysis shows that the DeMABAR algorithm can also almost completely eliminate the influence of adversarial attacks and is inherently robust in the Byzantine setting, where an unknown fraction of the agents can be Byzantine, i.e., may arbitrarily select arms and communicate wrong information. We also conduct numerical experiments to illustrate the robustness and effectiveness of the proposed method.
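To make the corruption model concrete, the sketch below simulates a single UCB1 agent facing a budget-limited adversary that zeroes out observed rewards of the best arm. This is an illustrative toy only, not the DeMABAR algorithm from the paper; the reward model, adversary strategy, and all function names are assumptions for the example.

```python
import math
import random

def run_corrupted_ucb(means, horizon, budget, seed=0):
    """UCB1 on Bernoulli arms; a hypothetical adversary zeroes out
    observed rewards of the best arm until its budget is exhausted."""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k          # pulls per arm
    sums = [0.0] * k          # sum of observed rewards per arm
    best = max(range(k), key=lambda i: means[i])
    spent = 0.0               # total corruption used so far
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1       # pull each arm once to initialize
        else:
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        # The adversary pays |corrupted - true| against its budget,
        # so total corruption never exceeds `budget`.
        if arm == best and reward > 0 and spent + reward <= budget:
            spent += reward
            reward = 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts, spent

counts, spent = run_corrupted_ucb([0.9, 0.5], horizon=2000, budget=50.0)
```

A robust algorithm in this setting is one whose regret grows only by an additive term proportional to `budget`, no matter how the adversary spends it.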

Published

2026-03-14

How to Cite

Hu, Z., Wang, Y., & Chen, C. (2026). Robust Decentralized Multi-armed Bandits: From Corruption-Resilience to Byzantine-Resilience. Proceedings of the AAAI Conference on Artificial Intelligence, 40(26), 21912–21920. https://doi.org/10.1609/aaai.v40i26.39344

Section

AAAI Technical Track on Machine Learning III