Tackling Safe and Efficient Multi-Agent Reinforcement Learning via Dynamic Shielding (Student Abstract)

Authors

  • Wenli Xiao The Chinese University of Hong Kong, Shenzhen
  • Yiwei Lyu Carnegie Mellon University
  • John M. Dolan Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v37i13.27041

Keywords:

Reinforcement Learning, Multiagent Systems, Robotics, Safe Reinforcement Learning

Abstract

Multi-agent Reinforcement Learning (MARL) is increasingly used in safety-critical applications but offers no safety guarantees, especially during training. In this paper, we propose dynamic shielding, a novel decentralized MARL framework that ensures safety in both the training and deployment phases. Our framework leverages shields, reactive systems that run in parallel with the reinforcement learning algorithm to monitor and correct agents' behavior. In our algorithm, shields dynamically split and merge according to the environment state in order to maintain decentralization and avoid overly conservative behavior while enjoying formal safety guarantees. We demonstrate the effectiveness of MARL with dynamic shielding in a mobile navigation scenario.
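The mechanism the abstract describes, a shield that filters each agent's proposed action and shield groups that split or merge as agents move, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: every name here (`Shield`, `filter`, `is_safe`, `safe_fallback`, `regroup`, `merge_range`) and the 1-D distance-based safety check are assumptions made for illustration.

```python
# Hypothetical sketch of dynamic shielding as described in the abstract.
# A shield runs alongside the learned policy, overriding unsafe actions;
# shields merge when their agents come within interaction range and
# split apart otherwise. All names and the toy 1-D safety check are
# illustrative assumptions, not the paper's actual algorithm.

from dataclasses import dataclass, field


@dataclass
class Shield:
    """Reactive monitor attached to a group of agents."""
    agents: set = field(default_factory=set)

    def filter(self, agent, action, state):
        # Keep the learner's action when the check passes;
        # otherwise substitute a safe fallback action.
        if self.is_safe(agent, action, state):
            return action
        return self.safe_fallback(agent, state)

    def is_safe(self, agent, action, state):
        # Toy safety check: the agent's next 1-D position must stay
        # more than 1.0 away from every other agent in this shield.
        nxt = state[agent] + action
        return all(abs(nxt - state[o]) > 1.0 for o in self.agents if o != agent)

    def safe_fallback(self, agent, state):
        # e.g., stop / hold position.
        return 0.0


def regroup(shields, state, merge_range=2.0):
    """Rebuild the shield set for the current state: agents within
    merge_range share a shield (merge); distant agents get their own
    shields (split)."""
    agents = sorted(a for s in shields for a in s.agents)
    groups = []
    for a in agents:
        for g in groups:
            if any(abs(state[a] - state[b]) <= merge_range for b in g):
                g.add(a)
                break
        else:
            groups.append({a})
    return [Shield(agents=g) for g in groups]
```

In this sketch, `regroup` is called every step so that decentralization is preserved: far-apart agents are monitored independently, and only nearby agents are jointly constrained, which is how the abstract's split/merge behavior avoids a single conservative centralized shield.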

Published

2023-09-06

How to Cite

Xiao, W., Lyu, Y., & Dolan, J. M. (2023). Tackling Safe and Efficient Multi-Agent Reinforcement Learning via Dynamic Shielding (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 16362-16363. https://doi.org/10.1609/aaai.v37i13.27041