Neural Policy Safety Verification via Predicate Abstraction: CEGAR

Authors

  • Marcel Vinzent, Saarland University
  • Siddhant Sharma, Indian Institute of Technology Delhi
  • Jörg Hoffmann, Saarland University & German Research Center for Artificial Intelligence (DFKI)

DOI:

https://doi.org/10.1609/aaai.v37i12.26772

Keywords:

General

Abstract

Neural networks (NN) are an increasingly important representation of action policies π. Recent work has extended predicate abstraction to prove safety of such π, through policy predicate abstraction (PPA), which over-approximates the state space subgraph induced by π. The advantage of PPA is that reasoning about the NN (calls to SMT solvers) is required only locally, at individual abstract state transitions, in contrast to bounded model checking (BMC), where SMT must reason globally about sequences of NN decisions. Indeed, it has been shown that PPA can outperform a simple BMC implementation. However, the abstractions underlying these results (i.e., the abstraction predicates) were supplied manually. Here we automate this step. We extend counterexample guided abstraction refinement (CEGAR) to PPA. This involves dealing with a new source of spuriousness in abstract unsafe paths, pertaining not to transition behavior but to the decisions of the neural network π. We introduce two methods for tackling this issue, based on the states involved, and we show that global SMT calls deciding spuriousness exactly can be avoided. We devise algorithmic enhancements leveraging incremental computation and heuristic search. We show empirically that the resulting verification tool has significant advantages over an encoding into the state-of-the-art model checker nuXmv. In particular, ours is the only approach in our experiments that succeeds in proving policies safe.
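
The overall CEGAR loop sketched in the abstract can be illustrated as follows. This is a minimal, hypothetical Python sketch only: the parameters build_ppa, find_abstract_unsafe_path, and check_transition are assumed interfaces standing in for the paper's abstraction construction, abstract graph search, and local spuriousness checks; the authors' actual algorithms (including the two refinement methods and the incremental and heuristic-search enhancements) are not reproduced here.

    # Minimal, hypothetical sketch of a CEGAR loop over policy predicate
    # abstraction (PPA). The callables passed in are assumed interfaces,
    # not the authors' implementation.

    def cegar(initial_predicates,
              build_ppa,                  # predicates -> abstract transition system (local SMT calls)
              find_abstract_unsafe_path,  # abstraction -> list of abstract transitions, or None
              check_transition,           # (transition, predicates) -> "real" or a refining predicate
              max_iters=100):
        """Refine the predicate abstraction until the policy is proven safe,
        a real counterexample is found, or the iteration budget runs out."""
        predicates = set(initial_predicates)
        for _ in range(max_iters):
            # Over-approximate the state-space subgraph induced by the policy.
            # SMT reasoning about the NN is needed only per abstract transition.
            abstraction = build_ppa(predicates)

            # Search the abstract graph for a path to an unsafe abstract state.
            path = find_abstract_unsafe_path(abstraction)
            if path is None:
                return "SAFE"  # no abstract unsafe path: the policy is safe

            # Check each step locally; spuriousness may stem from transition
            # behavior or from the NN policy's decision at that step.
            refinement = None
            for transition in path:
                verdict = check_transition(transition, predicates)
                if verdict != "real":
                    refinement = verdict  # predicate ruling out the spurious step
                    break

            if refinement is None:
                return "UNSAFE"  # every step is realizable: concrete counterexample
            predicates.add(refinement)

        return "UNKNOWN"
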

Published

2023-06-26

How to Cite

Vinzent, M., Sharma, S., & Hoffmann, J. (2023). Neural Policy Safety Verification via Predicate Abstraction: CEGAR. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 15188-15196. https://doi.org/10.1609/aaai.v37i12.26772

Section

AAAI Special Track on Safe and Robust AI