Sensitivity Analysis of Deep Neural Networks


  • Hai Shu, The University of Texas MD Anderson Cancer Center
  • Hongtu Zhu, University of North Carolina at Chapel Hill



Deep neural networks (DNNs) have achieved superior performance in various prediction tasks, but can be very vulnerable to adversarial examples or perturbations. Therefore, it is crucial to measure the sensitivity of DNNs to perturbations in real applications. We introduce a novel perturbation manifold and its associated influence measure to quantify the effects of perturbations on DNN classifiers, including both external perturbations to input samples and internal perturbations to network parameters. The proposed measure is motivated by information geometry and enjoys desirable invariance properties. We demonstrate that our influence measure is useful for four model-building tasks: detecting potential ‘outliers’, analyzing the sensitivity of model architectures, comparing network sensitivity between training and test sets, and locating vulnerable areas. Experiments show reasonably good performance of the proposed measure for the popular DNN models ResNet50 and DenseNet121 on the CIFAR10 and MNIST datasets.
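The idea of an information-geometric influence measure can be illustrated numerically. The sketch below is a simplified toy version, not the paper's exact formulation: for a tiny softmax classifier (with hypothetical random weights `W`, `b`), it approximates the score vectors of `log p(y | x + delta)` with respect to an input perturbation `delta` at `delta = 0` by finite differences, assembles the Fisher information matrix of the perturbation, and takes its largest eigenvalue as a worst-case local sensitivity score.

```python
import numpy as np

# Toy illustration (not the paper's exact influence measure): quantify
# how an input perturbation delta affects a small softmax classifier via
# the Fisher information of log p(y | x + delta) evaluated at delta = 0.

rng = np.random.default_rng(0)

# Hypothetical one-layer softmax classifier: 3 classes, 4 input features.
W = rng.standard_normal((3, 4))
b = rng.standard_normal(3)

def softmax(z):
    z = z - z.max()           # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def log_probs(x, delta):
    """Class log-probabilities for the perturbed input x + delta."""
    return np.log(softmax(W @ (x + delta) + b))

x = rng.standard_normal(4)    # a single input sample
eps = 1e-5                    # finite-difference step
d = len(x)

# Score vectors s_y = d/d(delta) log p(y | x + delta) at delta = 0,
# approximated coordinate-wise by central finite differences.
scores = np.zeros((3, d))
for j in range(d):
    e_j = np.zeros(d)
    e_j[j] = eps
    scores[:, j] = (log_probs(x, e_j) - log_probs(x, -e_j)) / (2 * eps)

# Fisher information of the perturbation: G = sum_y p_y * s_y s_y^T.
p = softmax(W @ x + b)
G = sum(p[y] * np.outer(scores[y], scores[y]) for y in range(3))

# Largest eigenvalue of G as a worst-case local influence score:
# the direction of delta along which the predictive distribution
# changes fastest.
influence = np.linalg.eigvalsh(G)[-1]
print(f"influence = {influence:.4f}")
```

Larger scores flag inputs (or, analogously, parameters) to which the classifier's predictive distribution is locally most sensitive, which is the kind of quantity the paper's tasks such as outlier detection and locating vulnerable areas rely on.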




How to Cite

Shu, H., & Zhu, H. (2019). Sensitivity Analysis of Deep Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 4943-4950.



AAAI Technical Track: Machine Learning