On the Verification of Neural ODEs with Stochastic Guarantees
DOI: https://doi.org/10.1609/aaai.v35i13.17372
Keywords: Safety, Robustness & Trustworthiness; (Deep) Neural Network Learning Theory; Sampling/Simulation-based Search
Abstract
We show that Neural ODEs, an emerging class of time-continuous neural networks, can be verified by solving a set of global-optimization problems. For this purpose, we introduce Stochastic Lagrangian Reachability (SLR), an abstraction-based technique for constructing a tight Reachtube (an over-approximation of the set of reachable states over a given time horizon), and provide stochastic guarantees in the form of confidence intervals for the Reachtube bounds. SLR inherently avoids the infamous wrapping effect (the accumulation of over-approximation errors) by performing local optimization steps to expand safe regions, instead of repeatedly forward-propagating them as deterministic reachability methods do. To enable fast local optimizations, we introduce a novel forward-mode adjoint sensitivity method to compute gradients without the need for backpropagation. Finally, we establish asymptotic and non-asymptotic convergence rates for SLR.
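The forward-mode sensitivity idea mentioned in the abstract can be illustrated with a minimal sketch: instead of backpropagating through an ODE solver, one integrates the variational equation dS/dt = J_f(x(t)) S alongside the state, so that S(T) = ∂x(T)/∂x(0) is obtained in a single forward pass. This is a generic textbook construction, not the paper's specific adjoint method; the damped-oscillator dynamics below are a hypothetical example.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical linear dynamics (a damped oscillator), chosen so the
# exact sensitivity matrix is known in closed form: S(T) = expm(A*T).
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])

def f(x):
    return A @ x

def jac(x):
    # Jacobian of f; constant here because f is linear.
    return A

def augmented(t, z, n):
    # z packs the state x (first n entries) and the flattened
    # sensitivity matrix S = dx(t)/dx(0) (remaining n*n entries).
    x, S = z[:n], z[n:].reshape(n, n)
    # Variational equation: dS/dt = J_f(x) @ S, with S(0) = I.
    return np.concatenate([f(x), (jac(x) @ S).ravel()])

def flow_with_sensitivity(x0, T):
    """Integrate state and sensitivity forward in one pass."""
    n = len(x0)
    z0 = np.concatenate([x0, np.eye(n).ravel()])
    sol = solve_ivp(augmented, (0.0, T), z0, args=(n,),
                    rtol=1e-9, atol=1e-9)
    zT = sol.y[:, -1]
    return zT[:n], zT[n:].reshape(n, n)

x0 = np.array([1.0, 0.0])
xT, S = flow_with_sensitivity(x0, 2.0)
# S approximates d x(T) / d x0, computed without any backward pass.
```

For a nonlinear Neural ODE, `jac` would be the Jacobian of the network with respect to its state input; the forward pass costs one augmented integration of dimension n + n², but requires no storage of intermediate solver states, which is what makes forward-mode attractive for gradient-based local optimization over reachable sets.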
Published
2021-05-18
How to Cite
Grünbacher, S., Hasani, R., Lechner, M., Cyranka, J., Smolka, S. A., & Grosu, R. (2021). On the Verification of Neural ODEs with Stochastic Guarantees. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11525-11535. https://doi.org/10.1609/aaai.v35i13.17372
Section
AAAI Technical Track on Philosophy and Ethics of AI