Factored Online Planning in Many-Agent POMDPs
DOI:
https://doi.org/10.1609/aaai.v38i16.29689
Keywords:
MAS: Multiagent Systems under Uncertainty, MAS: Coordination and Collaboration, PRS: Planning with Markov Models (MDPs, POMDPs), MAS: Multiagent Learning
Abstract
In centralized multi-agent systems, often modeled as multi-agent partially observable Markov decision processes (MPOMDPs), the action and observation spaces grow exponentially with the number of agents, making the value and belief estimation of single-agent online planning ineffective. Prior work partially tackles value estimation by exploiting the inherent structure of multi-agent settings via so-called coordination graphs. Additionally, belief estimation methods have been improved by incorporating the likelihood of observations into the approximation. However, the challenges of value estimation and belief estimation have only been tackled individually, which prevents existing methods from scaling to settings with many agents. Therefore, we address these challenges simultaneously. First, we introduce weighted particle filtering to a sample-based online planner for MPOMDPs. Second, we present a scalable approximation of the belief. Third, we adapt an approach that exploits the typical locality of agent interactions to novel online planning algorithms for MPOMDPs operating on a so-called sparse particle filter tree. Our experimental evaluation against several state-of-the-art baselines shows that our methods (1) are competitive in settings with only a few agents and (2) improve over the baselines in the presence of many agents.
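The weighted particle filtering idea mentioned in the abstract can be illustrated with a minimal, generic sketch. This is not the paper's implementation: the function names, the hypothetical `transition_fn`/`obs_likelihood_fn` callables, and the toy resampling scheme are illustrative assumptions. The core idea it demonstrates is that, rather than rejecting particles whose simulated observation differs from the received one (which fails when the joint observation space is exponentially large), each particle is re-weighted by the likelihood of the received joint observation.

```python
import random

def weighted_particle_update(particles, weights, joint_action, joint_obs,
                             transition_fn, obs_likelihood_fn):
    """One importance-weighted belief update step (illustrative sketch).

    particles        : list of sampled hidden states approximating the belief
    weights          : matching list of normalized particle weights
    transition_fn    : samples a successor state s' given (s, joint_action)
    obs_likelihood_fn: returns O(joint_obs | s', joint_action)
    """
    new_particles, new_weights = [], []
    for s, w in zip(particles, weights):
        s_next = transition_fn(s, joint_action)                   # s' ~ T(. | s, a)
        lik = obs_likelihood_fn(joint_obs, s_next, joint_action)  # O(o | s', a)
        new_particles.append(s_next)
        new_weights.append(w * lik)
    total = sum(new_weights)
    if total == 0:  # degenerate case: no particle explains the observation
        new_weights = [1.0 / len(new_particles)] * len(new_particles)
    else:
        new_weights = [w / total for w in new_weights]
    # plain multinomial resampling to counteract weight degeneracy
    resampled = random.choices(new_particles, weights=new_weights,
                               k=len(new_particles))
    return resampled, [1.0 / len(resampled)] * len(resampled)
```

In a toy two-state example where the observation equals the true state with probability 0.9, particles in the observed state are up-weighted before resampling, so the belief concentrates on states consistent with the joint observation without ever enumerating the joint observation space.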
Published
2024-03-24
How to Cite
Galesloot, M. F. L., Simão, T. D., Junges, S., & Jansen, N. (2024). Factored Online Planning in Many-Agent POMDPs. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 17407-17415. https://doi.org/10.1609/aaai.v38i16.29689
Section
AAAI Technical Track on Multiagent Systems