Integrating Knowledge Compilation with Reinforcement Learning for Routes

Authors

  • Jiajing Ling Singapore Management University
  • Kushagra Chandak Singapore Management University
  • Akshat Kumar Singapore Management University

Keywords:

Multi-agent planning and learning, Reinforcement learning using planning (model-based, Bayesian, deep, etc.), Representations for learned models in planning, Learning domain and action models for planning

Abstract

Sequential multiagent decision-making under partial observability and uncertainty poses several challenges. Although multiagent reinforcement learning (MARL) approaches have improved scalability, combinatorial domains remain challenging because random exploration by agents is unlikely to generate useful reward signals. We address cooperative multiagent pathfinding under uncertainty and partial observability, where agents move from their respective sources to destinations while also satisfying constraints (e.g., visiting landmarks). Our main contributions are: (1) compiling domain knowledge, such as underlying graph connectivity and domain constraints, into propositional-logic-based decision diagrams; (2) developing modular techniques to integrate such knowledge with deep MARL algorithms; and (3) developing fast algorithms to query the compiled knowledge for accelerated episode simulation in RL. Empirically, our approach tractably represents various types of domain constraints, and significantly outperforms previous MARL approaches in both sample complexity and solution quality on a number of instances.
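To illustrate the core idea of guiding exploration with compiled domain knowledge, the sketch below (not the authors' implementation; the graph, function names, and masking scheme are illustrative assumptions) encodes graph connectivity as a valid-action lookup and uses it to restrict random exploration to feasible moves, which is what makes reward signals reachable in combinatorial pathfinding domains:

```python
# Minimal sketch: mask infeasible moves using precompiled graph
# connectivity, so an exploring agent only samples legal actions.
import random

# Hypothetical 4-node graph; adjacency lists define legal moves.
GRAPH = {
    0: [1, 2],
    1: [0, 3],
    2: [0, 3],
    3: [1, 2],
}
NUM_NODES = len(GRAPH)

def valid_action_mask(node):
    """Boolean mask over 'move to node j' actions allowed from `node`."""
    return [j in GRAPH[node] for j in range(NUM_NODES)]

def masked_random_action(node, rng=random):
    """Sample uniformly among feasible moves instead of all actions."""
    feasible = [j for j, ok in enumerate(valid_action_mask(node)) if ok]
    return rng.choice(feasible)

print(valid_action_mask(0))      # [False, True, True, False]
print(masked_random_action(0))   # always 1 or 2
```

In the paper's setting, the lookup would be answered by querying a compiled decision diagram that also encodes richer constraints (e.g., landmark visits), rather than a plain adjacency list.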


Published

2021-05-17

How to Cite

Ling, J., Chandak, K., & Kumar, A. (2021). Integrating Knowledge Compilation with Reinforcement Learning for Routes. Proceedings of the International Conference on Automated Planning and Scheduling, 31(1), 542-550. Retrieved from https://ojs.aaai.org/index.php/ICAPS/article/view/16002