Probabilistic Robustness Quantification of Neural Networks

Authors

  • Gopi Kishan, Indian Institute of Technology, Roorkee, India

DOI:

https://doi.org/10.1609/aaai.v35i18.17979

Keywords:

Deep Learning, Statistical Learning, Probabilistic Reasoning

Abstract

Safety properties of neural networks are critical to their application in safety-critical domains. Quantifying their robustness against uncertainties is an emerging area of research. In this work, we propose an approach for providing probabilistic guarantees on the performance of a trained neural network. We present two novel metrics for probabilistic verification, one defined on the training data distribution and one on a test dataset. First, given a trained neural network, we quantify the probability that the model makes an error on a random sample drawn from the training data distribution. Second, from the output logits of a sample test point, we measure its p-value under the learned logit distribution to quantify the model's confidence at that test point. We compare our results with a softmax-based metric under black-box adversarial attacks on a simple CNN architecture trained for MNIST digit classification.
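To make the two metrics concrete, the following is a minimal sketch under stated assumptions, not the paper's implementation: the error probability is estimated by Monte Carlo sampling with a Hoeffding-style deviation bound, and a test point is scored by the empirical p-value of its maximum logit against logits collected on training data, alongside the softmax baseline used for comparison. All function names, the dummy predictor, and the numbers are illustrative placeholders.

```python
import numpy as np

def error_probability_estimate(model_predict, samples, labels, delta=0.05):
    """Metric 1 (sketch): Monte Carlo estimate of the probability that the
    model errs on a random sample from the training data distribution,
    with a Hoeffding-style deviation bound holding with probability 1 - delta."""
    error_rate = float(np.mean(model_predict(samples) != labels))
    bound = float(np.sqrt(np.log(2.0 / delta) / (2.0 * len(labels))))
    return error_rate, bound

def empirical_p_value(reference_scores, test_score):
    """Metric 2 (sketch): one-sided empirical p-value of a test point's logit
    score against scores collected on training data; small values flag
    low-confidence (possibly adversarial) inputs."""
    reference_scores = np.asarray(reference_scores)
    return float(np.mean(reference_scores <= test_score))

def softmax_confidence(logits):
    """Baseline for comparison: maximum softmax probability of the logit vector."""
    z = np.asarray(logits, dtype=float) - np.max(logits)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return float(probs.max())

# Toy usage with synthetic numbers (placeholders, not results from the paper).
rng = np.random.default_rng(0)

# Metric 1 with a hypothetical stand-in for the trained CNN.
dummy_predict = lambda x: np.zeros(len(x), dtype=int)
X = rng.normal(size=(1000, 784))
y = rng.integers(0, 10, size=1000)
err, dev = error_probability_estimate(dummy_predict, X, y)
print(f"estimated error probability: {err:.3f} +/- {dev:.3f}")

# Metric 2 vs. softmax on a flat, low-logit input (e.g., an adversarial sample).
reference_max_logits = rng.normal(loc=12.0, scale=2.0, size=10_000)
adversarial_logits = rng.normal(loc=3.0, scale=0.2, size=10)
print("p-value:", empirical_p_value(reference_max_logits, adversarial_logits.max()))
print("softmax confidence:", softmax_confidence(adversarial_logits))
```

In this sketch the p-value drops toward zero when a test point's logits fall outside the range seen on training data, even in cases where the softmax output remains deceptively peaked.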

Published

2021-05-18

How to Cite

Kishan, G. (2021). Probabilistic Robustness Quantification of Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(18), 15966-15967. https://doi.org/10.1609/aaai.v35i18.17979

Issue

Section

AAAI Undergraduate Consortium