Debugging a Policy: Automatic Action-Policy Testing in AI Planning
DOI
https://doi.org/10.1609/icaps.v32i1.19820
Keywords
Action Policies, Testing, Heuristic Functions
Abstract
Testing is a promising way to gain trust in neural action policies π. Previous work on policy testing in sequential decision making targeted environment behavior leading to failure conditions. But if the failure is unavoidable given that behavior, then π is not actually to blame. For a situation to qualify as a "bug" in π, there must be an alternative policy π' that does better. We introduce a generic policy testing framework based on that intuition. This raises the bug confirmation problem: deciding whether or not a state is a bug. We analyze the use of optimistic and pessimistic bounds for the design of test oracles approximating that problem. We contribute an implementation of our framework in classical planning, experimenting with several test oracles and with random-walk methods generating test states biased to poor policy performance and/or state novelty. We evaluate these techniques on policies π learned with ASNets. We find that they are able to effectively identify bugs in these π, and that our random-walk biases improve over uninformed baselines.
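The bound-based oracle design described in the abstract can be illustrated with a short sketch. This is a hypothetical illustration, not the paper's implementation: the names `confirm_bug`, `policy_cost`, `lower_bound`, and `upper_bound` are assumptions. The idea, under those assumptions, is that a pessimistic (upper) bound on the optimal cost from a test state confirms a bug whenever the policy's own cost exceeds it, while an optimistic (lower) bound confirms the absence of a bug whenever the policy's cost already matches it.

```python
# Hypothetical sketch of a bound-based bug-confirmation oracle, following the
# abstract's intuition: a state s is a bug in policy pi iff some alternative
# policy does better from s. All identifiers are illustrative assumptions,
# not the paper's API.

from enum import Enum


class Verdict(Enum):
    BUG = "bug"          # pi is provably worse than some alternative policy
    NO_BUG = "no bug"    # pi is provably as good as any alternative policy
    UNKNOWN = "unknown"  # the bounds are too loose to decide


def confirm_bug(policy_cost: float, lower_bound: float, upper_bound: float) -> Verdict:
    """Approximate the bug-confirmation problem for a single test state s.

    policy_cost -- cost of running pi from s until the goal
                   (float('inf') if pi never reaches the goal)
    lower_bound -- optimistic bound: a lower bound on the optimal cost from s,
                   e.g. an admissible heuristic value
    upper_bound -- pessimistic bound: an upper bound on the optimal cost from s,
                   e.g. the cost of any plan found by a satisficing planner
    """
    if policy_cost > upper_bound:
        # Some policy reaches the goal with cost at most upper_bound, which is
        # strictly less than pi's cost: s is a confirmed bug in pi.
        return Verdict.BUG
    if policy_cost <= lower_bound:
        # No policy can beat lower_bound, so pi is optimal from s: not a bug.
        return Verdict.NO_BUG
    return Verdict.UNKNOWN
```

A test state generated by the random-walk methods would be classified by running the policy from it and comparing the resulting cost against the two bounds; states for which the verdict is UNKNOWN require tighter bounds to be resolved.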
Published
2022-06-13
How to Cite
Steinmetz, M., Fišer, D., Eniser, H. F., Ferber, P., Gros, T. P., Heim, P., Höller, D., Schuler, X., Wüstholz, V., Christakis, M., & Hoffmann, J. (2022). Debugging a Policy: Automatic Action-Policy Testing in AI Planning. Proceedings of the International Conference on Automated Planning and Scheduling, 32(1), 353-361. https://doi.org/10.1609/icaps.v32i1.19820