A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks

Authors

  • Jinghui Chen University of California, Los Angeles
  • Dongruo Zhou University of California, Los Angeles
  • Jinfeng Yi JD AI Research
  • Quanquan Gu University of California, Los Angeles

DOI:

https://doi.org/10.1609/aaai.v34i04.5753

Abstract

Depending on how much information an adversary can access, adversarial attacks can be classified into white-box attacks and black-box attacks. For white-box attacks, optimization-based algorithms such as projected gradient descent (PGD) can achieve relatively high attack success rates within a moderate number of iterations. However, they tend to generate adversarial examples near or on the boundary of the perturbation set, resulting in large distortion. Furthermore, their corresponding black-box attack algorithms also suffer from high query complexity, thereby limiting their practical usefulness. In this paper, we focus on the problem of developing efficient and effective optimization-based adversarial attack algorithms. In particular, we propose a novel adversarial attack framework for both white-box and black-box settings based on a variant of the Frank-Wolfe algorithm. We show in theory that the proposed attack algorithms are efficient with an O(1/√T) convergence rate. Empirical results on the ImageNet and MNIST datasets also verify the efficiency and effectiveness of the proposed algorithms. More specifically, our proposed algorithms attain the best attack performance among all baselines in both white-box and black-box settings, and are more time- and query-efficient than the state of the art.
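To illustrate the idea behind a Frank-Wolfe style attack, the sketch below shows one plausible white-box iteration over an L∞ perturbation ball: instead of PGD's gradient step followed by projection, each iterate is a convex combination of the current point and the output of a linear maximization oracle, which keeps the adversarial example inside the constraint set rather than pushing it onto the boundary. This is a minimal NumPy sketch under assumed settings (inputs in [0, 1], a user-supplied grad_loss callable, illustrative function and parameter names), not the authors' released implementation.

```python
import numpy as np

def frank_wolfe_linf_attack(x_orig, grad_loss, eps=0.05, gamma=0.1, beta=0.9, T=50):
    """Sketch of a Frank-Wolfe style L_inf white-box attack (illustrative only).

    x_orig    : original input as a NumPy array, values assumed in [0, 1]
    grad_loss : callable returning d(attack loss)/d(x) at a given x (model-specific)
    eps       : L_inf radius of the allowed perturbation set
    gamma     : fixed Frank-Wolfe step size
    beta      : momentum coefficient for gradient averaging
    T         : number of iterations
    """
    x = x_orig.copy()
    m = np.zeros_like(x)
    for _ in range(T):
        g = grad_loss(x)                 # gradient of the attack loss at x
        m = beta * m + (1.0 - beta) * g  # momentum-smoothed gradient
        # Linear maximization oracle over the L_inf ball centered at x_orig:
        # the maximizer of <v, m> over the ball is one of its corners.
        v = x_orig + eps * np.sign(m)
        # Convex combination keeps the iterate strictly inside the ball,
        # which is why the resulting distortion tends to be smaller than PGD's.
        x = (1.0 - gamma) * x + gamma * v
        x = np.clip(x, 0.0, 1.0)         # stay in the valid pixel range
    return x
```

In a black-box setting, grad_loss would be replaced by a gradient estimate built from queries to the target model, which is where the query efficiency claimed in the abstract matters.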

Published

2020-04-03

How to Cite

Chen, J., Zhou, D., Yi, J., & Gu, Q. (2020). A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3486-3494. https://doi.org/10.1609/aaai.v34i04.5753

Section

AAAI Technical Track: Machine Learning