AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples

Authors

  • Antonio Emanuele Cinà University of Genoa, Italy
  • Jérôme Rony École de Technologie Supérieure, Canada
  • Maura Pintor University of Cagliari, Italy
  • Luca Demetrio University of Genoa, Italy
  • Ambra Demontis University of Cagliari, Italy
  • Battista Biggio University of Cagliari, Italy
  • Ismail Ben Ayed École de Technologie Supérieure, Canada
  • Fabio Roli University of Genoa, Italy

DOI:

https://doi.org/10.1609/aaai.v39i3.32263

Abstract

While novel gradient-based attacks are continuously proposed to improve the optimization of adversarial examples, each is shown to outperform its predecessors using different experimental setups, implementations, and computational budgets, leading to biased and unfair comparisons. In this work, we overcome this issue by proposing AttackBench, an evaluation framework that assesses the effectiveness of each attack (along with its different library implementations) under the same maximum available computational budget. To this end, we (i) define a novel optimality metric that quantifies how close each attack is to the optimal solution (empirically estimated by ensembling all attacks), and (ii) limit the maximum number of forward and backward queries that each attack can execute on the target model. Our extensive experimental analysis compares more than 100 attack implementations over 800 different configurations, considering both CIFAR-10 and ImageNet models, and shows that only a few attack implementations outperform all the remaining approaches. These findings suggest that novel defenses should be evaluated against different attacks than those normally used in the literature, to avoid overly optimistic robustness evaluations. We release AttackBench as a publicly available benchmark, including a continuously updated leaderboard and source code, to maintain an up-to-date ranking of the best gradient-based attacks.
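To illustrate the idea behind the optimality metric, the sketch below shows one plausible way to score attacks against an ensemble-derived optimum. This is a hypothetical formulation for intuition only, not the paper's exact definition: for each test sample, the empirically optimal perturbation size is taken as the smallest norm found by any attack in the ensemble, and each attack is scored by how close its own per-sample perturbation norms come to those optima.

```python
import numpy as np

def optimality_scores(attack_norms):
    """Illustrative (hypothetical) optimality metric.

    attack_norms: dict mapping attack name -> sequence of minimal
    perturbation norms, one per test sample. Norms are assumed to be
    strictly positive; a failed attack on a sample is recorded as np.inf.

    Returns a dict mapping attack name -> score in [0, 1], where 1 means
    the attack matched the ensemble-best perturbation on every sample.
    """
    names = list(attack_norms)
    norms = np.stack([np.asarray(attack_norms[n], dtype=float) for n in names])
    best = norms.min(axis=0)  # per-sample empirical optimum over all attacks
    scores = {}
    for name, row in zip(names, norms):
        # best / row is 1 when the attack matches the optimum, < 1 when its
        # perturbation is larger, and 0 when the attack failed (row == inf).
        ratio = best / row
        scores[name] = float(ratio.mean())
    return scores
```

For example, an attack that finds the smallest perturbation on every sample scores 1.0, while one whose perturbations are twice the optimum on half the samples scores lower; failed samples pull the score toward 0.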

Published

2025-04-11

How to Cite

Cinà, A. E., Rony, J., Pintor, M., Demetrio, L., Demontis, A., Biggio, B., … Roli, F. (2025). AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples. Proceedings of the AAAI Conference on Artificial Intelligence, 39(3), 2600–2608. https://doi.org/10.1609/aaai.v39i3.32263

Section

AAAI Technical Track on Computer Vision II