PID-Based Approach to Adversarial Attacks

Authors

  • Chen Wan, School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou 510006, China; Guangdong Provincial Key Laboratory of Information Security Technology, Guangzhou 510006, China
  • Biaohua Ye, School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou 510006, China; Guangdong Provincial Key Laboratory of Information Security Technology, Guangzhou 510006, China
  • Fangjun Huang, School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou 510006, China; Guangdong Provincial Key Laboratory of Information Security Technology, Guangzhou 510006, China

DOI:

https://doi.org/10.1609/aaai.v35i11.17204

Keywords:

Adversarial Learning & Robustness, Adversarial Attacks & Robustness, Security

Abstract

Adversarial attacks can misguide deep neural networks (DNNs) by adding small-magnitude perturbations to normal examples, where the perturbation is mainly determined by the gradient of the loss function with respect to the input. Various strategies have previously been proposed to enhance the performance of adversarial attacks. However, all of these methods utilize only present and past gradients to generate adversarial examples; the trend of the gradient in the future (i.e., the derivative of the gradient) has not yet been considered. Inspired by the classic proportional-integral-derivative (PID) controller from the field of automatic control, we propose a new PID-based approach for generating adversarial examples. Our method incorporates the present gradient, the accumulated past gradients, and the derivative of the gradient, which correspond to the P, I, and D components of the PID controller, respectively. Extensive experiments consistently demonstrate that our method achieves higher attack success rates and exhibits better transferability than state-of-the-art gradient-based adversarial attacks. Furthermore, our method possesses good extensibility and can be applied to almost all available gradient-based adversarial attacks.
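The abstract's analogy can be sketched as an iterative sign-based attack in which the update direction combines a P term (the current gradient), an I term (the running sum of past gradients, as in momentum-style attacks), and a D term (the difference between consecutive gradients, a forward estimate of the gradient's trend). The coefficients `k_p`, `k_i`, `k_d`, the step-size schedule, and the exact combination rule below are illustrative assumptions, not the paper's published hyper-parameters:

```python
import numpy as np

def pid_attack(x, grad_fn, eps=0.1, steps=10, k_p=1.0, k_i=1.0, k_d=1.0):
    """Sketch of a PID-style iterative adversarial attack (untargeted).

    grad_fn(x) should return the gradient of the loss w.r.t. the input x;
    the attack ascends the loss within an L-infinity ball of radius eps.
    """
    alpha = eps / steps                   # per-step budget, as in iterative FGSM variants
    x_adv = x.astype(np.float64).copy()
    integral = np.zeros_like(x_adv)       # I: accumulated past gradients
    prev_grad = np.zeros_like(x_adv)
    for _ in range(steps):
        g = grad_fn(x_adv)                # P: present gradient
        integral += g
        derivative = g - prev_grad        # D: change in gradient between steps
        update = k_p * g + k_i * integral + k_d * derivative
        x_adv = x_adv + alpha * np.sign(update)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
        prev_grad = g
    return x_adv
```

With a toy quadratic loss L(x) = 0.5‖x‖² (so grad_fn is the identity), the attack pushes the input outward within the eps-ball, increasing the loss while keeping the perturbation bounded.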

Published

2021-05-18

How to Cite

Wan, C., Ye, B., & Huang, F. (2021). PID-Based Approach to Adversarial Attacks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 10033-10040. https://doi.org/10.1609/aaai.v35i11.17204

Section

AAAI Technical Track on Machine Learning IV