Successor Features Based Multi-Agent RL for Event-Based Decentralized MDPs

Authors

  • Tarun Gupta Indian Institute of Technology Hyderabad
  • Akshat Kumar Singapore Management University
  • Praveen Paruchuri Indian Institute of Technology Hyderabad

DOI:

https://doi.org/10.1609/aaai.v33i01.33016054

Abstract

Decentralized MDPs (Dec-MDPs) provide a rigorous framework for collaborative multi-agent sequential decision-making under uncertainty. However, their computational complexity limits their practical impact. To address this, we focus on a class of Dec-MDPs consisting of independent collaborating agents that are tied together through a global reward function that depends upon their entire histories of states and actions to accomplish joint tasks. To overcome this scalability barrier, our main contributions are: (a) We propose a new actor-critic based Reinforcement Learning (RL) approach for event-based Dec-MDPs using successor features (SF), a value function representation that decouples the dynamics of the environment from the rewards; (b) We then present Dec-ESR (Decentralized Event based Successor Representation), which generalizes learning for event-based Dec-MDPs using SF within an end-to-end deep RL framework; (c) We also show that Dec-ESR allows useful transfer of information on related but different tasks, hence bootstrapping learning for faster convergence on new tasks; (d) For validation, we test our approach on a large multi-agent coverage problem which models schedule coordination of agents in a real urban subway network, and achieve better quality solutions than previous best approaches.
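The decoupling the abstract refers to can be illustrated with a minimal tabular successor-feature sketch. This is not the paper's Dec-ESR method; it is a generic single-agent, fixed-policy example (with a hypothetical random environment, feature matrix `phi`, and task weights `w_a`, `w_b`) showing how, once the successor features are computed from the dynamics alone, values for a new task are obtained by swapping only the reward weights:

```python
import numpy as np

# Illustrative sketch of successor features (SF), not the paper's algorithm.
# Setup: a fixed policy on a small random MDP (hypothetical numbers).
n_states, n_features, gamma = 4, 3, 0.9
rng = np.random.default_rng(0)

# One-step reward features phi(s); a task's reward is r(s) = phi(s) @ w.
phi = rng.random((n_states, n_features))

# Transition matrix P[s, s'] under the fixed policy (rows sum to 1).
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)

# Successor features satisfy psi = phi + gamma * P @ psi, i.e. the
# closed form psi = (I - gamma P)^{-1} phi -- dynamics only, no rewards.
psi = np.linalg.solve(np.eye(n_states) - gamma * P, phi)

# Task A: the value function is a linear readout of psi.
w_a = np.array([1.0, 0.0, -1.0])
v_a = psi @ w_a

# Transfer to task B: reuse psi, change only the reward weights.
w_b = np.array([0.0, 2.0, 0.5])
v_b = psi @ w_b

# Sanity check: matches solving task A's Bellman equation from scratch.
v_a_direct = np.linalg.solve(np.eye(n_states) - gamma * P, phi @ w_a)
assert np.allclose(v_a, v_a_direct)
```

The transfer claim in contribution (c) rests on this structure: the expensive part (psi, which encodes the policy and dynamics) is reused across tasks, while each new task only requires new reward weights.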

Published

2019-07-17

How to Cite

Gupta, T., Kumar, A., & Paruchuri, P. (2019). Successor Features Based Multi-Agent RL for Event-Based Decentralized MDPs. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6054-6061. https://doi.org/10.1609/aaai.v33i01.33016054

Section

AAAI Technical Track: Multiagent Systems