Adversarial Training and Provable Robustness: A Tale of Two Objectives
DOI: https://doi.org/10.1609/aaai.v35i8.16904
Keywords: Adversarial Learning & Robustness, Optimization
Abstract
We propose a principled framework that combines adversarial training and provable robustness verification for training certifiably robust neural networks. We formulate training as a joint optimization problem with both empirical and provable robustness objectives, and we develop a novel gradient-descent technique that eliminates bias in stochastic multi-gradients. We provide both a theoretical convergence analysis of the proposed technique and an experimental comparison with state-of-the-art methods. Results on MNIST and CIFAR-10 show that our method consistently matches or outperforms prior approaches for provable ℓ∞ robustness. Notably, we achieve 6.60% verified test error on MNIST at ε = 0.3 and 66.57% on CIFAR-10 at ε = 8/255.
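The abstract describes optimizing two objectives (empirical and provable robustness) jointly via a multi-gradient method. As a minimal illustration of the general idea, the sketch below computes the standard two-objective min-norm (MGDA-style) combination of two gradients, which yields a common descent direction for both objectives. This is a generic textbook construction, not the paper's bias-corrected stochastic variant; the function name is hypothetical.

```python
import numpy as np

def min_norm_combination(g1: np.ndarray, g2: np.ndarray) -> np.ndarray:
    """Min-norm point in the convex hull of {g1, g2}.

    Standard two-objective MGDA step: the returned vector d = a*g1 + (1-a)*g2
    with a in [0, 1] chosen to minimize ||d||^2. A (nonzero) d has nonnegative
    inner product with both gradients, so -d descends both objectives.
    Note: this is a generic construction, not the paper's bias-corrected
    stochastic multi-gradient technique.
    """
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:
        # Gradients coincide; either one is the combination.
        return g1.copy()
    # Closed-form minimizer of ||a*g1 + (1-a)*g2||^2 over a, clipped to [0, 1].
    alpha = float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return alpha * g1 + (1.0 - alpha) * g2
```

For orthogonal unit gradients, e.g. g1 = (1, 0) and g2 = (0, 1), the combination is (0.5, 0.5), which correlates positively with both gradients; a training step would move against this direction.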
Published: 2021-05-18
How to Cite
Fan, J., & Li, W. (2021). Adversarial Training and Provable Robustness: A Tale of Two Objectives. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 7367-7376. https://doi.org/10.1609/aaai.v35i8.16904
Section: AAAI Technical Track on Machine Learning I