Improving Adversarial Robustness via Probabilistically Compact Loss with Logit Constraints

Authors

  • Xin Li, Wayne State University, USA
  • Xiangrui Li, Wayne State University, USA
  • Deng Pan, Wayne State University, USA
  • Dongxiao Zhu, Wayne State University, USA

DOI:

https://doi.org/10.1609/aaai.v35i10.17030

Keywords:

Adversarial Learning & Robustness

Abstract

Convolutional neural networks (CNNs) have achieved state-of-the-art performance on various computer vision tasks. However, recent studies demonstrate that these models are vulnerable to carefully crafted adversarial samples and suffer a significant performance drop when classifying them. Many methods have been proposed to improve adversarial robustness (e.g., adversarial training and new loss functions that learn adversarially robust feature representations). Here we offer a unique insight into the predictive behavior of CNNs: they tend to misclassify adversarial samples into the most probable false classes. This inspires us to propose a new Probabilistically Compact (PC) loss with logit constraints, which can be used as a drop-in replacement for the cross-entropy (CE) loss to improve a CNN's adversarial robustness. Specifically, the PC loss enlarges the probability gaps between the true class and false classes, while the logit constraints prevent these gaps from being erased by small perturbations. We extensively compare our method with the state of the art on large-scale datasets under both white-box and black-box attacks to demonstrate its effectiveness. The source code is available at https://github.com/xinli0928/PC-LC.
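
To make the idea concrete, below is a minimal PyTorch sketch of a probability-margin loss in the spirit of the abstract: it hinge-penalizes every false class whose softmax probability comes within a margin of the true class, and adds an L2 penalty on the logits as a simple surrogate for the paper's logit constraints. The function name pc_style_loss, the margin xi, the penalty weight lam, and the hinge/L2 forms are illustrative assumptions, not the authors' exact formulation (see the repository above for that).

import torch
import torch.nn.functional as F

def pc_style_loss(logits, targets, xi=0.1, lam=0.01):
    """Probability-margin loss with a logit penalty (illustrative sketch).

    Penalizes any false class whose softmax probability comes within `xi`
    of the true class; `lam * ||logits||^2` is a simplified stand-in for
    the paper's logit constraints, not the exact published form.
    """
    probs = F.softmax(logits, dim=1)                # (N, C) class probabilities
    true_p = probs.gather(1, targets.unsqueeze(1))  # (N, 1) true-class probability
    # Hinge on the probability gap: nonzero wherever a false class is too close.
    margins = F.relu(xi + probs - true_p)           # broadcasts to (N, C)
    mask = F.one_hot(targets, num_classes=logits.size(1)).bool()
    margins = margins.masked_fill(mask, 0.0)        # true class incurs no penalty
    pc = margins.sum(dim=1).mean()                  # enlarge true-vs-false probability gaps
    # Keep logit magnitudes small so a tiny input perturbation cannot erase the gaps.
    lc = lam * logits.pow(2).sum(dim=1).mean()
    return pc + lc

# Usage: a drop-in replacement for F.cross_entropy during training, e.g.
#   loss = pc_style_loss(model(x), y)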

Published

2021-05-18

How to Cite

Li, X., Li, X., Pan, D., & Zhu, D. (2021). Improving Adversarial Robustness via Probabilistically Compact Loss with Logit Constraints. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 8482-8490. https://doi.org/10.1609/aaai.v35i10.17030

Issue

Vol. 35 No. 10 (2021): Proceedings of the AAAI Conference on Artificial Intelligence

Section

AAAI Technical Track on Machine Learning III