Adversarial Robustness through Disentangled Representations

Authors

  • Shuo Yang, University of Sydney
  • Tianyu Guo, Peking University
  • Yunhe Wang, Huawei Noah's Ark Lab
  • Chang Xu, University of Sydney

Keywords

Adversarial Attacks & Robustness

Abstract

Despite the remarkable empirical performance of deep learning models, many studies have revealed their vulnerability to adversarial examples: inputs with imperceptible adversarial perturbations can induce erroneous predictions. Although recent works have substantially improved model robustness through adversarial training, an evident gap between natural accuracy and adversarial robustness remains. To mitigate this problem, in this paper we assume that robust and non-robust representations are two basic ingredients entangled in the integral representation. To achieve adversarial robustness, the robust representations of natural and adversarial examples should be disentangled from the non-robust part, and aligning these robust representations can bridge the gap between accuracy and robustness. Inspired by this motivation, we propose a novel defense method called the Deep Robust Representation Disentanglement Network (DRRDN). Specifically, DRRDN employs a disentangler to extract and align the robust representations of both adversarial and natural examples. Theoretical analysis guarantees mitigation of the trade-off between robustness and accuracy given good disentanglement and alignment performance. Experimental results on benchmark datasets demonstrate the empirical superiority of our method.
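As a rough illustration of the disentangle-and-align idea described above, the sketch below splits an integral representation into a robust and a non-robust part and measures how far apart the robust parts of a natural and an adversarial example are. This is not the paper's actual architecture: in DRRDN the disentangler is learned, whereas here the split is a fixed slice, and the function names, dimensions, and mean-squared alignment objective are all assumptions made purely for exposition.

```python
import numpy as np

def disentangle(z, robust_dim):
    """Hypothetical disentangler: split an integral representation z into a
    robust part and a non-robust part.  In DRRDN this split is learned; a
    fixed slice is used here only for illustration."""
    return z[:robust_dim], z[robust_dim:]

def alignment_loss(z_nat, z_adv, robust_dim):
    """Mean-squared distance between the robust parts of the natural and
    adversarial representations; driving it to zero aligns them."""
    r_nat, _ = disentangle(z_nat, robust_dim)
    r_adv, _ = disentangle(z_adv, robust_dim)
    return float(np.mean((r_nat - r_adv) ** 2))

# Toy representations: identical robust halves, differing non-robust halves.
z_nat = np.array([1.0, 2.0, 0.5, -0.3])
z_adv = np.array([1.0, 2.0, 3.1, 0.9])
print(alignment_loss(z_nat, z_adv, robust_dim=2))  # 0.0 — robust parts already aligned
```

In training, such an alignment term would be minimized jointly with the classification loss on the robust part, so that the perturbation is pushed into the discarded non-robust component.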

Published

2021-05-18

How to Cite

Yang, S., Guo, T., Wang, Y., & Xu, C. (2021). Adversarial Robustness through Disentangled Representations. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 3145-3153. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16424

Section

AAAI Technical Track on Computer Vision III