Successor Features Based Multi-Agent RL for Event-Based Decentralized MDPs


  • Tarun Gupta Indian Institute of Technology Hyderabad
  • Akshat Kumar Singapore Management University
  • Praveen Paruchuri Indian Institute of Technology Hyderabad



Decentralized MDPs (Dec-MDPs) provide a rigorous framework for collaborative multi-agent sequential decision-making under uncertainty. However, their computational complexity limits their practical impact. To address this, we focus on a class of Dec-MDPs consisting of independent, collaborating agents that are tied together through a global reward function that depends upon their entire histories of states and actions to accomplish joint tasks. To overcome this scalability barrier, our main contributions are: (a) we propose a new actor-critic based reinforcement learning (RL) approach for event-based Dec-MDPs using successor features (SF), a value-function representation that decouples the dynamics of the environment from the rewards; (b) we then present Dec-ESR (Decentralized Event-based Successor Representation), which generalizes learning for event-based Dec-MDPs using SF within an end-to-end deep RL framework; (c) we also show that Dec-ESR allows useful transfer of information across related but different tasks, and hence bootstraps learning for faster convergence on new tasks; (d) for validation, we test our approach on a large multi-agent coverage problem that models schedule coordination of agents in a real urban subway network, achieving better-quality solutions than the previous best approaches.
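The decoupling that successor features provide can be seen in a minimal single-agent sketch (illustrative only; this is not the paper's Dec-ESR algorithm, and the chain MDP, one-hot features, and fixed right-moving policy are assumptions made for the example). With linear rewards r(s) = φ(s)·w, policy evaluation can be done once on the features to get ψ(s) = E[Σ_t γ^t φ(s_t)]; the same ψ then prices any reward weights w via V(s) = ψ(s)·w, which is what enables transfer to related tasks:

```python
import numpy as np

# Toy chain MDP with a fixed policy (always move right; last state self-loops).
# Features phi(s) are one-hot, so any reward is linear: r(s) = phi(s) . w.
n_states, gamma = 5, 0.9
phi = np.eye(n_states)                               # one-hot state features
next_state = lambda s: min(s + 1, n_states - 1)      # assumed fixed policy

# Evaluate the policy on FEATURES instead of rewards:
#   psi(s) = phi(s) + gamma * psi(next(s))
# psi is the successor-feature matrix: row s accumulates discounted features.
psi = np.zeros((n_states, n_states))
for _ in range(200):                                 # iterate to near fixed point
    psi = phi + gamma * psi[[next_state(s) for s in range(n_states)]]

# The same psi now evaluates ANY linear reward without re-solving the MDP:
w_goal = np.zeros(n_states)
w_goal[-1] = 1.0                                     # task 1: reward at last state
V_goal = psi @ w_goal

w_uniform = np.ones(n_states)                        # task 2: reward everywhere
V_uniform = psi @ w_uniform                          # transfer: reuse psi, swap w
```

Here `V_goal` and `V_uniform` come from one policy-evaluation pass: the dynamics live in `psi`, the task lives in `w`, mirroring the decoupling the abstract describes.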




How to Cite

Gupta, T., Kumar, A., & Paruchuri, P. (2019). Successor Features Based Multi-Agent RL for Event-Based Decentralized MDPs. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6054-6061.



AAAI Technical Track: Multiagent Systems