On the Verification of Neural ODEs with Stochastic Guarantees

Authors

  • Sophie Grünbacher, Technische Universität Wien (TU Wien)
  • Ramin Hasani, Massachusetts Institute of Technology (MIT) and Technische Universität Wien (TU Wien)
  • Mathias Lechner, Institute of Science and Technology Austria (IST Austria)
  • Jacek Cyranka, University of Warsaw
  • Scott A. Smolka, Stony Brook University
  • Radu Grosu, Technische Universität Wien (TU Wien)

Keywords

Safety, Robustness & Trustworthiness, (Deep) Neural Network Learning Theory, Sampling/Simulation-based Search

Abstract

We show that Neural ODEs, an emerging class of time-continuous neural networks, can be verified by solving a set of global-optimization problems. For this purpose, we introduce Stochastic Lagrangian Reachability (SLR), an abstraction-based technique for constructing a tight Reachtube (an over-approximation of the set of reachable states over a given time-horizon), and provide stochastic guarantees in the form of confidence intervals for the Reachtube bounds. SLR inherently avoids the infamous wrapping effect (accumulation of over-approximation errors) by performing local optimization steps to expand safe regions instead of repeatedly forward-propagating them as is done by deterministic reachability methods. To enable fast local optimizations, we introduce a novel forward-mode adjoint sensitivity method to compute gradients without the need for backpropagation. Finally, we establish asymptotic and non-asymptotic convergence rates for SLR.
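The forward-mode sensitivity idea mentioned in the abstract, propagating sensitivities alongside the state so that gradients of the flow are available without a backward pass, can be illustrated by integrating the variational equation dS/dt = J_f(x) S with S(0) = I together with the ODE itself. The sketch below is a minimal toy illustration of this general principle, not the paper's SLR method; the vector field `f` and its Jacobian are assumptions chosen for brevity.

```python
import numpy as np

# Assumed toy vector field resembling a tiny Neural ODE: dx/dt = tanh(W x).
W = np.array([[-0.5, 1.0],
              [-1.0, -0.5]])

def f(x):
    return np.tanh(W @ x)

def jac_f(x):
    # Jacobian of f at x: diag(1 - tanh(Wx)^2) @ W
    return (1.0 - np.tanh(W @ x) ** 2)[:, None] * W

def flow_with_sensitivity(x0, T, steps=1000):
    """Integrate dx/dt = f(x) together with the variational equation
    dS/dt = J_f(x) S, S(0) = I, via explicit Euler.
    Returns x(T) and S(T) = d x(T) / d x0, computed purely forward in time
    (no backpropagation through the solver is needed)."""
    x = x0.astype(float).copy()
    S = np.eye(len(x0))
    h = T / steps
    for _ in range(steps):
        # Both right-hand sides use the pre-step x, so S is exactly the
        # derivative of the discrete Euler flow map.
        x, S = x + h * f(x), S + h * (jac_f(x) @ S)
    return x, S
```

A quick sanity check is to compare `S` against finite differences of the flow map; the two agree because the variational update is precisely the derivative of each Euler step.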

Published

2021-05-18

How to Cite

Grünbacher, S., Hasani, R., Lechner, M., Cyranka, J., Smolka, S. A., & Grosu, R. (2021). On the Verification of Neural ODEs with Stochastic Guarantees. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11525-11535. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17372

Section

AAAI Technical Track on Philosophy and Ethics of AI