Partner-Aware Algorithms in Decentralized Cooperative Bandit Teams

Authors

  • Erdem Biyik, Stanford University
  • Anusha Lalitha, Stanford University
  • Rajarshi Saha, Stanford University
  • Andrea Goldsmith, Princeton University / Stanford University
  • Dorsa Sadigh, Stanford University

DOI:

https://doi.org/10.1609/aaai.v36i9.21158

Keywords:

Multiagent Systems (MAS), Humans And AI (HAI), Machine Learning (ML), Intelligent Robotics (ROB)

Abstract

When humans collaborate with each other, they often make decisions by observing others and considering the consequences that their actions may have on the entire team, instead of greedily doing what is best for just themselves. We would like our AI agents to collaborate effectively in a similar way by capturing a model of their partners. In this work, we propose and analyze a decentralized Multi-Armed Bandit (MAB) problem with coupled rewards as an abstraction of more general multi-agent collaboration. We demonstrate that naive extensions of single-agent optimal MAB algorithms fail when applied to decentralized bandit teams. Instead, we propose a Partner-Aware strategy for joint sequential decision-making that extends the well-known single-agent Upper Confidence Bound algorithm. We analytically show that our proposed strategy achieves logarithmic regret, and provide extensive experiments involving human-AI and human-robot collaboration to validate our theoretical findings. Our results show that the proposed partner-aware strategy outperforms other known methods, and our human subject studies suggest humans prefer to collaborate with AI agents implementing our partner-aware strategy.
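The partner-aware strategy in the abstract extends the classic single-agent Upper Confidence Bound (UCB) algorithm. As background only, the sketch below implements standard single-agent UCB1 (not the paper's partner-aware variant); the function name, Bernoulli arms, and horizon are illustrative choices, not taken from the paper.

```python
import math
import random

def ucb1(reward_fns, horizon, seed=0):
    """Run the single-agent UCB1 algorithm.

    reward_fns: list of callables, each taking an RNG and returning
    a stochastic reward in [0, 1]. Returns per-arm pull counts.
    """
    rng = random.Random(seed)
    k = len(reward_fns)
    counts = [0] * k    # number of times each arm was pulled
    means = [0.0] * k   # empirical mean reward of each arm
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # pull each arm once to initialize estimates
        else:
            # pick the arm maximizing its upper confidence bound:
            # empirical mean plus an exploration bonus that shrinks
            # as the arm is pulled more often
            arm = max(
                range(k),
                key=lambda i: means[i]
                + math.sqrt(2 * math.log(t) / counts[i]),
            )
        r = reward_fns[arm](rng)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental mean
    return counts

# Illustrative example: two Bernoulli arms with success rates 0.9 and 0.2.
arms = [
    lambda rng: float(rng.random() < 0.9),
    lambda rng: float(rng.random() < 0.2),
]
pulls = ucb1(arms, horizon=1000)
```

Because the exploration bonus decays with each pull, the better arm (success rate 0.9) accumulates the large majority of the 1000 pulls, which is the behavior underlying UCB1's logarithmic regret. The paper's contribution is showing that naively running such single-agent rules independently fails in decentralized teams with coupled rewards, and replacing them with a partner-aware variant.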

Published

2022-06-28

How to Cite

Biyik, E., Lalitha, A., Saha, R., Goldsmith, A., & Sadigh, D. (2022). Partner-Aware Algorithms in Decentralized Cooperative Bandit Teams. Proceedings of the AAAI Conference on Artificial Intelligence, 36(9), 9296-9303. https://doi.org/10.1609/aaai.v36i9.21158

Section

AAAI Technical Track on Multiagent Systems