Probabilistic Robustness Quantification of Neural Networks
Keywords: Deep Learning, Statistical Learning, Probabilistic Reasoning
Abstract
Safety properties of neural networks are critical to their application in safety-critical domains, and quantifying their robustness against uncertainties is an emerging area of research. In this work, we propose an approach for providing probabilistic guarantees on the performance of a trained neural network. We present two novel metrics for probabilistic verification: one on the training data distribution and one on a test dataset. First, given a trained neural network, we quantify the probability that the model makes an error on a random sample drawn from the training data distribution. Second, from the output logits of a sample test point, we measure its p-value under the learned logit distribution to quantify the model's confidence at that test point. We compare our results against a softmax-based metric under black-box adversarial attacks on a simple CNN architecture trained for MNIST digit classification.
How to Cite
Kishan, G. (2021). Probabilistic Robustness Quantification of Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(18), 15966-15967. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17979
AAAI Undergraduate Consortium