Bayesian Policy Search for Multi-Agent Role Discovery

Authors

  • Aaron Wilson, Oregon State University
  • Alan Fern, Oregon State University
  • Prasad Tadepalli, Oregon State University

DOI

https://doi.org/10.1609/aaai.v24i1.7679

Keywords

Reinforcement Learning, Bayesian Reinforcement Learning, Stochastic Simulation

Abstract

Bayesian inference is an appealing approach for leveraging prior knowledge in reinforcement learning (RL). In this paper we describe an algorithm for discovering different classes of roles for agents via Bayesian inference. In particular, we develop a Bayesian policy search approach for Multi-Agent RL (MARL), which is model-free and allows for priors on policy parameters. We present a novel optimization algorithm based on hybrid MCMC, which leverages both the prior and gradient information estimated from trajectories. Our experiments in a complex real-time strategy game demonstrate the effective discovery of roles from supervised trajectories, the use of discovered roles for successful transfer to similar tasks, and the discovery of roles through reinforcement learning.
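The abstract's central idea — sampling policy parameters from a posterior shaped by both a prior and gradient information estimated from trajectories — can be illustrated with a minimal sketch. This is not the paper's algorithm or domain: it substitutes a stochastic-gradient Langevin update for the paper's hybrid MCMC, a single-parameter two-armed bandit for the real-time strategy game, and a standard normal prior for the paper's role priors. All function names below are hypothetical.

```python
import math
import random

random.seed(0)

def softmax_prob(theta):
    """Probability of picking arm 0 under a 2-arm softmax (logistic) policy."""
    return 1.0 / (1.0 + math.exp(-theta))

def estimate_return_and_grad(theta, n_rollouts=200):
    """Monte-Carlo estimate of the expected reward and its REINFORCE
    (score-function) gradient. Arm 0 pays 1, arm 1 pays 0 -- a toy
    stand-in for trajectory rollouts in a full MARL setting."""
    p = softmax_prob(theta)
    ret = grad = 0.0
    for _ in range(n_rollouts):
        took_arm0 = random.random() < p
        r = 1.0 if took_arm0 else 0.0
        score = (1.0 - p) if took_arm0 else -p  # d/dtheta log pi(a|theta)
        ret += r
        grad += r * score
    return ret / n_rollouts, grad / n_rollouts

def log_prior_grad(theta):
    """Gradient of a standard normal log-prior on the policy parameter."""
    return -theta

def langevin_step(theta, step=0.1, beta=10.0):
    """One Langevin-style proposal: the drift combines the prior gradient
    with the trajectory-estimated policy gradient, so samples concentrate
    on policies that are both high-return and probable under the prior."""
    _, g_ret = estimate_return_and_grad(theta)
    drift = log_prior_grad(theta) + beta * g_ret
    noise = random.gauss(0.0, math.sqrt(2.0 * step))
    return theta + step * drift + noise

theta, samples = 0.0, []
for t in range(400):
    theta = langevin_step(theta)
    if t >= 200:  # discard burn-in, keep posterior samples
        samples.append(theta)

posterior_mean = sum(samples) / len(samples)
print(softmax_prob(posterior_mean) > 0.5)  # sampler should favor the rewarding arm
```

Because the drift is the gradient of (log prior + beta * expected return), the chain's stationary distribution places most of its mass on policies favoring the rewarding arm, which is the model-free, prior-aware behavior the abstract describes in simplified form.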

Published

2010-07-03

How to Cite

Wilson, A., Fern, A., & Tadepalli, P. (2010). Bayesian Policy Search for Multi-Agent Role Discovery. Proceedings of the AAAI Conference on Artificial Intelligence, 24(1), 624-629. https://doi.org/10.1609/aaai.v24i1.7679