Robustness Guarantees for Bayesian Inference with Gaussian Processes


  • Luca Cardelli Microsoft Research Cambridge
  • Marta Kwiatkowska University of Oxford
  • Luca Laurenti University of Oxford
  • Andrea Patane University of Oxford



Bayesian inference and Gaussian processes are widely used in applications ranging from robotics and control to biological systems. Many of these applications are safety-critical and require a characterization of the uncertainty associated with the learning model and formal guarantees on its predictions. In this paper we define a robustness measure for Bayesian inference against input perturbations, given by the probability that, for a test point and a compact set in the input space containing the test point, the prediction of the learning model will remain δ-close for all the points in the set, for δ > 0. Such measures can be used to provide formal probabilistic guarantees for the absence of adversarial examples. Employing the theory of Gaussian processes, we derive upper bounds on this robustness measure using the Borell–TIS inequality, and propose algorithms for their computation. We evaluate our techniques on two examples: a GP regression problem and a fully-connected deep neural network, where we rely on weak convergence to GPs to study adversarial examples on the MNIST dataset.
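To make the robustness measure concrete, the sketch below gives a naive Monte Carlo approximation of P(|f(x) − f(x*)| ≤ δ for all x in T) for a 1-D GP regression posterior, with T a compact interval around the test point x*. This is only an illustrative sampling estimate on a finite grid, not the analytic Borell–TIS upper bounds or the algorithms developed in the paper; the kernel, training data, and hyperparameters are arbitrary choices for the example.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.5, variance=1.0):
    """Squared-exponential kernel matrix between 1-D point sets A and B."""
    d = A[:, None] - B[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X_train, y_train, X_test, noise=1e-4):
    """Posterior mean and covariance of a zero-mean GP with RBF kernel."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test)
    Kss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    v = np.linalg.solve(L, Ks)
    return Ks.T @ alpha, Kss - v.T @ v

def mc_robustness(X_train, y_train, x_star, T_grid, delta,
                  n_samples=5000, seed=0):
    """Monte Carlo estimate of P(max_{x in T} |f(x) - f(x*)| <= delta),
    with T discretized by T_grid (a finite-grid approximation of the
    compact set in the robustness definition)."""
    rng = np.random.default_rng(seed)
    X_test = np.concatenate(([x_star], T_grid))
    mean, cov = gp_posterior(X_train, y_train, X_test)
    cov += 1e-8 * np.eye(len(X_test))  # jitter for numerical stability
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    diffs = np.abs(samples[:, 1:] - samples[:, :1])  # |f(x) - f(x*)|
    return np.mean(diffs.max(axis=1) <= delta)

# Toy 1-D regression problem (illustrative data, not from the paper).
X_train = np.array([-1.0, 0.0, 1.0])
y_train = np.sin(X_train)
x_star = 0.2
T_grid = np.linspace(x_star - 0.1, x_star + 0.1, 20)  # compact set T
p = mc_robustness(X_train, y_train, x_star, T_grid, delta=0.2)
```

A grid-based sample average like this only approximates the supremum over T and gives no formal guarantee; the paper's contribution is precisely to replace such estimates with certified upper bounds via the Borell–TIS inequality.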




How to Cite

Cardelli, L., Kwiatkowska, M., Laurenti, L., & Patane, A. (2019). Robustness Guarantees for Bayesian Inference with Gaussian Processes. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 7759-7768.



AAAI Technical Track: Reasoning under Uncertainty