Shielding in Resource-Constrained Goal POMDPs
DOI:
https://doi.org/10.1609/aaai.v37i12.26715
Keywords:
General
Abstract
We consider partially observable Markov decision processes (POMDPs) modeling an agent that needs a supply of a certain resource (e.g., electricity stored in batteries) to operate correctly. The resource is consumed by the agent's actions and can be replenished only in certain states. The agent aims to minimize the expected cost of reaching some goal while preventing resource exhaustion, a problem we call resource-constrained goal optimization (RSGO). We take a two-step approach to the RSGO problem. First, using formal methods techniques, we design an algorithm computing a shield for a given scenario: a procedure that observes the agent and prevents it from using actions that might eventually lead to resource exhaustion. Second, we augment the POMCP heuristic search algorithm for POMDP planning with our shields to obtain an algorithm solving the RSGO problem. We implement our algorithm and present experiments showing its applicability to benchmarks from the literature.
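To illustrate the shielding idea described in the abstract, the sketch below shows, under simplifying assumptions, how a shield can filter a planner's candidate actions so that the remaining resource always suffices to reach a reload state. This is not the authors' implementation; the names `Shield`, `cost_to_reload`, and the tiny example at the bottom are hypothetical, and the worst-case reload costs are assumed to be precomputed (in the paper such guarantees come from a formal-methods analysis of the model).

```python
# Illustrative sketch only, not the paper's algorithm: a shield that blocks actions
# after which resource exhaustion could become unavoidable, assuming precomputed
# upper bounds on the resource needed to reach a reload state from each state.

from dataclasses import dataclass
from typing import Dict, List, Set


@dataclass(frozen=True)
class Action:
    name: str
    consumption: int  # resource consumed when the action is taken


class Shield:
    """Filters out actions that might eventually lead to resource exhaustion."""

    def __init__(self, capacity: int, cost_to_reload: Dict[str, int]) -> None:
        # cost_to_reload[s]: assumed upper bound on the resource needed to reach
        # some reload state from state s (hypothetical precomputed table).
        self.capacity = capacity
        self.cost_to_reload = cost_to_reload

    def allowed(self, belief_support: Set[str], resource: int,
                candidates: List[Action],
                successors: Dict[str, Dict[str, Set[str]]]) -> List[Action]:
        """Return the candidates that are safe in every state of the belief support."""
        safe: List[Action] = []
        for a in candidates:
            ok = True
            for s in belief_support:
                for t in successors[s].get(a.name, set()):
                    # After paying the action's consumption, the agent must still
                    # afford the worst-case trip to a reload state from successor t.
                    if resource - a.consumption < self.cost_to_reload[t]:
                        ok = False
            if ok:
                safe.append(a)
        return safe


if __name__ == "__main__":
    # Tiny hypothetical example: reload state "R" is reachable from "A".
    shield = Shield(capacity=10, cost_to_reload={"A": 2, "B": 5, "R": 0})
    succ = {"A": {"go": {"B"}, "recharge": {"R"}}, "B": {}, "R": {}}
    acts = [Action("go", consumption=4), Action("recharge", consumption=1)]
    # With 6 units left, "go" is blocked (6 - 4 < 5 needed from B); "recharge" is allowed.
    print([a.name for a in shield.allowed({"A"}, resource=6,
                                          candidates=acts, successors=succ)])
```

A planner such as POMCP would query a shield of this kind at each decision point and restrict its action selection to the returned set; the goal-cost optimization itself is left to the planner.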
Published
2023-06-26
How to Cite
Ajdarów, M., Brlej, Š., & Novotný, P. (2023). Shielding in Resource-Constrained Goal POMDPs. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 14674-14682. https://doi.org/10.1609/aaai.v37i12.26715
Section
AAAI Special Track on Safe and Robust AI