Exploring the Vulnerability of Deep Neural Networks: A Study of Parameter Corruption

Authors

  • Xu Sun, Peking University
  • Zhiyuan Zhang, Peking University
  • Xuancheng Ren, Peking University
  • Ruixuan Luo, Peking University
  • Liangyou Li, Huawei Noah's Ark Lab

Keywords

Safety, Robustness & Trustworthiness

Abstract

We argue that the vulnerability of model parameters is of crucial value to the study of model robustness and generalization, yet little research has been devoted to understanding this matter. In this work, we propose an indicator that measures the robustness of neural network parameters by exploiting their vulnerability via parameter corruption. The proposed indicator describes the maximum loss variation in the non-trivial worst-case scenario under parameter corruption. For practical purposes, we give a gradient-based estimation, which is far more effective than random corruption trials, as those can hardly induce the worst-case accuracy degradation. Equipped with theoretical support and empirical validation, we are able to systematically investigate the robustness of different model parameters and reveal vulnerabilities of deep neural networks that have previously received little attention. Moreover, we can enhance the models accordingly with the proposed adversarial corruption-resistant training, which not only improves parameter robustness but also translates into accuracy gains.
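The gradient-based estimation described in the abstract can be illustrated with a minimal sketch: within a small norm ball around the current parameters, a first-order approximation says the loss increases fastest along the gradient direction, so stepping that way approximates the worst-case corruption far better than a random perturbation of the same magnitude. The toy linear model, the `eps` radius, and all function names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def loss(w, X, y):
    # Mean squared error of a linear model (illustrative stand-in for a network loss).
    return float(np.mean((X @ w - y) ** 2))

def grad(w, X, y):
    # Analytic gradient of the MSE loss with respect to the parameters.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def gradient_corruption(w, X, y, eps=0.1):
    # Gradient-based estimate of the worst-case corruption within an L2 ball
    # of radius eps: step along the normalized gradient to (approximately)
    # maximize the loss, per the first-order expansion L(w + d) ~ L(w) + g.d.
    g = grad(w, X, y)
    return w + eps * g / (np.linalg.norm(g) + 1e-12)

def random_corruption(w, eps=0.1, rng=None):
    # Baseline for comparison: a random direction of the same norm,
    # which rarely finds the damaging directions in high dimensions.
    rng = rng or np.random.default_rng(0)
    d = rng.standard_normal(w.shape)
    return w + eps * d / np.linalg.norm(d)

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 5))
w_true = rng.standard_normal(5)
y = X @ w_true
w = w_true + 0.5 * rng.standard_normal(5)  # trained-but-imperfect parameters

base_loss = loss(w, X, y)
grad_loss = loss(gradient_corruption(w, X, y), X, y)
rand_loss = loss(random_corruption(w), X, y)
print(base_loss, grad_loss, rand_loss)
```

In this quadratic toy problem the gradient-directed corruption is guaranteed to raise the loss, while a random perturbation of the same norm typically changes it far less, mirroring the abstract's observation that random corruption trials can hardly induce the worst-case degradation.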

Published

2021-05-18

How to Cite

Sun, X., Zhang, Z., Ren, X., Luo, R., & Li, L. (2021). Exploring the Vulnerability of Deep Neural Networks: A Study of Parameter Corruption. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11648-11656. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17385

Section

AAAI Technical Track on Philosophy and Ethics of AI