Towards Certificated Model Robustness Against Weight Perturbations

Authors

  • Tsui-Wei Weng Massachusetts Institute of Technology
  • Pu Zhao Northeastern University
  • Sijia Liu IBM Research
  • Pin-Yu Chen IBM Research
  • Xue Lin Northeastern University
  • Luca Daniel Massachusetts Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v34i04.6105

Abstract

This work studies the sensitivity of neural networks to weight perturbations, corresponding to a newly developed threat model in which the adversary perturbs the neural network parameters rather than the input. We propose an efficient approach to compute a certified robustness bound on weight perturbations, within which the network is guaranteed not to produce the erroneous outputs desired by the adversary. In addition, we identify a useful connection between our certification method and the problem of weight quantization, a popular model compression technique for deep neural networks (DNNs) and a ‘must-try’ step when designing DNN inference engines for resource-constrained computing platforms such as mobiles, FPGAs, and ASICs. Specifically, we study weight quantization, i.e., weight perturbations in the non-adversarial setting, through the lens of certified robustness, and we demonstrate significant improvements in the generalization ability of quantized networks through our robustness-aware quantization scheme.
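To make the certification idea concrete, the sketch below is an illustration using plain interval arithmetic, not the paper's actual bounding algorithm: it propagates an elementwise weight-perturbation budget eps through a small ReLU network and returns a lower bound on the logit margin, so a positive margin certifies that no weight perturbation within the budget can change the prediction. The function names, network shapes, and the quantization_radius helper are hypothetical and chosen only for this example.

import numpy as np

def interval_affine(W, b, eps, x_lo, x_hi):
    """Propagate the input interval [x_lo, x_hi] through an affine layer whose
    weights may each be perturbed by at most eps (elementwise)."""
    xc = (x_lo + x_hi) / 2.0            # input center
    xr = (x_hi - x_lo) / 2.0            # input radius (non-negative)
    zc = W @ xc + b                     # nominal pre-activation
    # Sound radius: |W| @ xr covers the input uncertainty; the eps term covers
    # every way the perturbed weights can interact with the input interval.
    zr = np.abs(W) @ xr + eps * np.sum(np.abs(xc) + xr)
    return zc - zr, zc + zr

def certified_margin(weights, biases, eps, x, label):
    """Lower bound on (true logit - largest other logit) over all weight
    perturbations of magnitude at most eps; a positive value certifies x."""
    lo = hi = np.asarray(x, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        lo, hi = interval_affine(W, b, eps, lo, hi)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU is monotone
    lo, hi = interval_affine(weights[-1], biases[-1], eps, lo, hi)
    return lo[label] - np.delete(hi, label).max()

def quantization_radius(W, bits):
    """Worst-case weight error from rounding W to a uniform grid with 2**bits
    levels over its value range (half of the quantization step)."""
    step = (W.max() - W.min()) / (2 ** bits - 1)
    return step / 2.0

In the same spirit as the quantization connection described above, rounding weights to a k-bit uniform grid perturbs each weight by at most half of the quantization step, so checking that certified_margin stays positive with eps set to that half-step (a single shared eps keeps the sketch simple; per-layer radii are a straightforward extension) links quantization error to the certified bound.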

Published

2020-04-03

How to Cite

Weng, T.-W., Zhao, P., Liu, S., Chen, P.-Y., Lin, X., & Daniel, L. (2020). Towards Certificated Model Robustness Against Weight Perturbations. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6356-6363. https://doi.org/10.1609/aaai.v34i04.6105

Section

AAAI Technical Track: Machine Learning