Composite Adversarial Attacks
Keywords: Adversarial Learning & Robustness, Adversarial Attacks & Robustness
Abstract
Adversarial attack is a technique for deceiving Machine Learning (ML) models, and it provides a way to evaluate their adversarial robustness. In practice, attack algorithms are selected and tuned by human experts to break an ML system. However, manual selection of attackers tends to be sub-optimal, leading to a mistaken assessment of model security. In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching the best combination of attack algorithms and their hyper-parameters from a candidate pool of 32 base attackers. We design a search space where an attack policy is represented as an attacking sequence, i.e., the output of the previous attacker is used as the initialization input for its successors. The multi-objective NSGA-II genetic algorithm is adopted to find the strongest attack policy with minimum complexity. Experimental results show that CAA beats 10 top attackers on 11 diverse defenses with less elapsed time (6× faster than AutoAttack), and achieves a new state-of-the-art on linf, l2 and unrestricted adversarial attacks.
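The core chaining idea described in the abstract can be illustrated with a minimal sketch: a policy is an ordered list of (attacker, hyper-parameter) pairs, and each attacker starts from the adversarial example produced by the one before it. The toy attackers and step sizes below are illustrative placeholders, not the paper's 32 base attackers.

```python
# Hedged sketch of CAA-style attack chaining. The attacker functions here
# are hypothetical stand-ins; a real pool would contain gradient-based,
# decision-based, and unrestricted attacks with tuned hyper-parameters.
from typing import Callable, List, Tuple

# An attacker maps (input vector, step size eps) -> perturbed vector.
Attack = Callable[[List[float], float], List[float]]

def step_up(x: List[float], eps: float) -> List[float]:
    # Toy "gradient-sign" style attacker: shift every coordinate by +eps.
    return [xi + eps for xi in x]

def step_down(x: List[float], eps: float) -> List[float]:
    # Toy second attacker: shift every coordinate by -eps/2
    # (deterministic here so the sketch is reproducible).
    return [xi - 0.5 * eps for xi in x]

def composite_attack(x: List[float],
                     policy: List[Tuple[Attack, float]]) -> List[float]:
    """Apply an attacking sequence: each attacker is initialized with
    the previous attacker's adversarial output."""
    for attack, eps in policy:
        x = attack(x, eps)
    return x

# A candidate policy the search procedure might evaluate.
policy = [(step_up, 0.1), (step_down, 0.2)]
adv = composite_attack([0.0, 1.0], policy)
```

In the paper, NSGA-II would search over which attackers appear in the sequence, in what order, and with which hyper-parameters, trading off attack strength against complexity; this sketch only shows how a fixed policy is executed.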
How to Cite
Mao, X., Chen, Y., Wang, S., Su, H., He, Y., & Xue, H. (2021). Composite Adversarial Attacks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 8884-8892. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17075
AAAI Technical Track on Machine Learning III