Tightening Robustness Verification of Convolutional Neural Networks with Fine-Grained Linear Approximation

Authors

  • Yiting Wu, Shanghai Key Laboratory for Trustworthy Computing, East China Normal University
  • Min Zhang, Shanghai Key Laboratory for Trustworthy Computing, East China Normal University; Shanghai Institute of Intelligent Science and Technology, Tongji University

Keywords

Safety, Robustness & Trustworthiness

Abstract

The robustness of a neural network can be quantified by a certified lower bound on the perturbation distance within which no perturbation alters the original input's classification result. Such certified lower bounds also serve as a criterion for evaluating the performance of robustness verification approaches. In this paper, we present a tighter linear approximation approach for the robustness verification of Convolutional Neural Networks (CNNs). With this tighter approximation, we can tighten the robustness verification of CNNs, i.e., prove that they are robust within a larger perturbation distance. Furthermore, our approach applies to general sigmoid-like activation functions. We implement the resulting verification toolkit, DeepCert, and evaluate it on open-source benchmarks, including LeNet and models trained on MNIST and CIFAR. Experimental results show that DeepCert outperforms other state-of-the-art robustness verification tools, improving the certified lower bound by up to 286.28% and achieving up to a 1566.76-times speedup on the same neural networks.
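To illustrate the kind of linear approximation the abstract refers to, the sketch below bounds the sigmoid activation on an input interval [l, u] with two linear functions, using the standard chord/tangent construction on a region where the sigmoid is convex. This is a minimal baseline, not the paper's fine-grained method; the function name `chord_and_tangent_bounds` and the interval choice are illustrative assumptions.

```python
import math

def sigmoid(x):
    # Standard logistic sigmoid.
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime(x):
    # Derivative of the sigmoid: s * (1 - s).
    s = sigmoid(x)
    return s * (1.0 - s)

def chord_and_tangent_bounds(l, u):
    """Linear lower/upper bounds for sigmoid on [l, u] with l < u <= 0.

    On that region the sigmoid is convex, so the chord through
    (l, sigmoid(l)) and (u, sigmoid(u)) is an upper bound, and the
    tangent at the midpoint is a lower bound. Returns two (slope,
    intercept) pairs: (lower, upper). Illustrative sketch only.
    """
    # Upper bound: the chord between the interval endpoints.
    k_up = (sigmoid(u) - sigmoid(l)) / (u - l)
    b_up = sigmoid(l) - k_up * l
    # Lower bound: the tangent line at the interval midpoint.
    m = 0.5 * (l + u)
    k_lo = sigmoid_prime(m)
    b_lo = sigmoid(m) - k_lo * m
    return (k_lo, b_lo), (k_up, b_up)
```

Verification tools of this family propagate such linear bounds layer by layer; tightening the gap between the two lines (e.g., by case-splitting on the interval, as fine-grained approaches do) directly tightens the certified lower bound.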

Published

2021-05-18

How to Cite

Wu, Y., & Zhang, M. (2021). Tightening Robustness Verification of Convolutional Neural Networks with Fine-Grained Linear Approximation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11674-11681. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17388

Section

AAAI Technical Track on Philosophy and Ethics of AI