On Realizing Planning Programs in Domains with Dead-End States
DOI: https://doi.org/10.1609/socs.v8i1.18431

Abstract
Agent planning programs are finite-state programs, possibly containing loops, whose atomic instructions consist of a guard, a maintenance goal, and an achievement goal, acting as precondition-invariance-postcondition assertions in program specification. Executing such programs requires generating plans that meet the goals specified in the atomic instructions while respecting the program's control flow. Recently, De Giacomo et al. (2016) presented a technique, based on iteratively solving classical planning problems with action costs, for realizing planning programs in deterministic domains. This technique generally works well for domains with no or very few dead-end states. In this paper, we propose an enhancement of this technique to handle deterministic domains with potentially many dead-end states, and we study the effectiveness of our approach through an experimental analysis.
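To make the structure of an atomic instruction concrete, the sketch below shows one possible Python encoding of a planning program as a finite-state transition system. The class and field names (PlanningProgram, Transition, guard, maintenance_goal, achievement_goal) are illustrative assumptions for exposition, not the formalism used in the paper.

```python
from dataclasses import dataclass, field

# Hypothetical encoding of an agent planning program as a finite-state
# transition system. Names and structure are illustrative only; the paper's
# formal definition may differ.

@dataclass(frozen=True)
class Transition:
    source: str                    # program state where the instruction is requested
    guard: frozenset               # literals that must hold to request the instruction
    maintenance_goal: frozenset    # literals every state of the realizing plan must preserve
    achievement_goal: frozenset    # literals the realizing plan must make true at its end
    target: str                    # program state reached once the plan succeeds

@dataclass
class PlanningProgram:
    initial_state: str
    transitions: list = field(default_factory=list)

    def available(self, program_state: str, domain_state: frozenset):
        """Instructions requestable in `program_state` whose guard holds in `domain_state`."""
        return [t for t in self.transitions
                if t.source == program_state and t.guard <= domain_state]

# Tiny example: a two-state looping program that alternates between
# delivering a package and returning to base, while keeping the battery charged.
prog = PlanningProgram(
    initial_state="q0",
    transitions=[
        Transition("q0", frozenset({"has_package"}),
                   frozenset({"battery_ok"}), frozenset({"delivered"}), "q1"),
        Transition("q1", frozenset(),
                   frozenset({"battery_ok"}), frozenset({"at_base"}), "q0"),
    ],
)

print([t.target for t in prog.available("q0", frozenset({"has_package", "battery_ok"}))])
```

Realizing such a program then amounts to synthesizing, for every program state and every requestable instruction, a plan whose execution satisfies the achievement goal while maintaining the maintenance goal, so that the loop can be followed indefinitely.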