Fastened CROWN: Tightened Neural Network Robustness Certificates

Authors

  • Zhaoyang Lyu, The Chinese University of Hong Kong
  • Ching-Yun Ko, MIT
  • Zhifeng Kong, University of California San Diego
  • Ngai Wong, The University of Hong Kong
  • Dahua Lin, The Chinese University of Hong Kong
  • Luca Daniel, Massachusetts Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v34i04.5944

Abstract

The rapid growth of real-world deep learning applications is accompanied by severe safety concerns. To address these concerns, much research has been devoted to reliably evaluating the fragility of deep neural networks. Besides adversarial attacks, quantifiers that certify safeguarded regions have also been designed over the past five years. The summarizing work of Salman et al. (2019) unifies a family of existing verifiers under a convex relaxation framework. We draw inspiration from this work and further demonstrate that the deterministic CROWN (Zhang et al. 2018) solution is optimal for a given linear programming problem under mild constraints. Given this theoretical result, the computationally expensive linear-programming-based method is shown to be unnecessary. We then propose an optimization-based approach, FROWN (Fastened CROWN): a general algorithm to tighten robustness certificates for neural networks. Extensive experiments on various individually trained networks verify the effectiveness of FROWN in safeguarding larger robust regions.
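
To make the underlying idea concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' implementation: CROWN-style certification bounds each unstable ReLU with two straight lines and fixes the lower-line slope heuristically, whereas an optimization-based certifier can treat such relaxation slopes as free parameters and tune them by gradient ascent to enlarge the certified lower bound. The toy one-hidden-layer network, the variable names (`alpha`, `eps`, `certified_lower_bound`), and the single scalar output are assumptions made purely for illustration.

```python
# Hypothetical sketch (not the paper's code): tighten a CROWN-style certificate
# by optimizing the lower-relaxation slopes of unstable ReLUs with gradient ascent.
import torch

torch.manual_seed(0)

# Toy network f(x) = W2 @ relu(W1 @ x + b1) + b2 with a scalar output.
d_in, d_hid = 4, 8
W1, b1 = torch.randn(d_hid, d_in), torch.randn(d_hid)
W2, b2 = torch.randn(1, d_hid), torch.randn(1)

x0 = torch.randn(d_in)   # input point to certify around
eps = 0.1                # radius of the L-infinity ball

def certified_lower_bound(alpha):
    """CROWN-style lower bound on f(x) for x in [x0 - eps, x0 + eps],
    with the lower-relaxation slope `alpha` of each unstable ReLU left free."""
    # Pre-activation bounds of the hidden layer via interval arithmetic.
    z0 = W1 @ x0 + b1
    rad = W1.abs().sum(dim=1) * eps
    l, u = z0 - rad, z0 + rad

    unstable = (l < 0) & (u > 0)
    s_up = torch.where(unstable, u / (u - l).clamp(min=1e-12), torch.zeros_like(u))
    c_up = torch.where(unstable, -s_up * l, torch.zeros_like(u))   # chord intercept
    s_lo = torch.where(unstable, alpha.clamp(0.0, 1.0), torch.zeros_like(u))

    # Stable neurons: identity if always active, zero if always inactive.
    active = l >= 0
    s_up = torch.where(active, torch.ones_like(u), s_up)
    s_lo = torch.where(active, torch.ones_like(u), s_lo)

    # Backward pass through the output layer: positive weights take the lower
    # line, negative weights take the upper line, so the result lower-bounds f.
    w = W2.squeeze(0)
    d = torch.where(w >= 0, s_lo, s_up)
    c = torch.where(w >= 0, torch.zeros_like(c_up), c_up)

    A = (w * d) @ W1                    # effective linear coefficients on x
    const = (w * d) @ b1 + w @ c + b2   # effective constant term
    return (A @ x0 - A.abs().sum() * eps + const).squeeze()

# Gradient ascent on the free slopes, standing in for the optimization of the
# relaxation parameters that tightens the certificate.
alpha = torch.full((d_hid,), 0.5, requires_grad=True)
opt = torch.optim.Adam([alpha], lr=0.05)
for step in range(200):
    opt.zero_grad()
    loss = -certified_lower_bound(alpha)   # maximize the certified bound
    loss.backward()
    opt.step()

print("optimized certified lower bound:", certified_lower_bound(alpha).item())
```

In this simplified setting only the lower-bound slopes are tuned; the general principle is that any relaxation parameter left free by the soundness conditions can be optimized to certify a larger robust region than a fixed heuristic choice would.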

Published

2020-04-03

How to Cite

Lyu, Z., Ko, C.-Y., Kong, Z., Wong, N., Lin, D., & Daniel, L. (2020). Fastened CROWN: Tightened Neural Network Robustness Certificates. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5037-5044. https://doi.org/10.1609/aaai.v34i04.5944

Section

AAAI Technical Track: Machine Learning