Characterizing the Evasion Attackability of Multi-label Classifiers

Authors

  • Zhuo Yang King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
  • Yufei Han Norton Research Group, Sophia Antipolis, France
  • Xiangliang Zhang King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

DOI:

https://doi.org/10.1609/aaai.v35i12.17273

Keywords:

Adversarial Learning & Robustness, Multi-class/Multi-label Learning & Extreme Classification

Abstract

Evasion attacks on multi-label learning systems are an interesting, widely witnessed, yet rarely explored research topic. Characterizing the crucial factors that determine the attackability of a multi-label classifier is key to interpreting the origin of its adversarial vulnerability and to understanding how to mitigate it. Our study is inspired by the theory of adversarial risk bounds. We associate the attackability of a targeted multi-label classifier with the regularity of the classifier and the training data distribution. Beyond the theoretical attackability analysis, we further propose an efficient empirical attackability estimator based on greedy label space exploration, with provable computational efficiency and approximation accuracy. Extensive experimental results on real-world datasets validate the unveiled attackability factors and the effectiveness of the proposed empirical attackability indicator.
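
To convey the flavor of a greedy label-space exploration for empirical attackability estimation, the sketch below is a minimal illustration, not the authors' algorithm: it assumes a linear multi-label classifier and a simple additive L2 budget, and all names (W, b, eps, greedy_attackability) are hypothetical choices for this example.

```python
# Illustrative sketch only (not the paper's implementation): greedily estimate
# how many labels of a linear multi-label classifier f(x) = sign(W x + b)
# can be flipped within an L2 perturbation budget `eps`.
import numpy as np

def flip_cost(w, b, x):
    """L2 distance from x to the decision boundary of one label (cost to flip it)."""
    return abs(w @ x + b) / (np.linalg.norm(w) + 1e-12)

def greedy_attackability(W, b, x, eps):
    """Greedily add the cheapest-to-flip labels until the budget eps is spent.
    Returns the estimated number of attackable labels."""
    costs = np.array([flip_cost(W[k], b[k], x) for k in range(W.shape[0])])
    order = np.argsort(costs)          # explore labels from cheapest to most expensive
    spent, flipped = 0.0, 0
    for k in order:
        # Crude additivity assumption: the budget is consumed label by label.
        # The paper's estimator is more refined; this only conveys the greedy idea.
        if spent + costs[k] <= eps:
            spent += costs[k]
            flipped += 1
        else:
            break
    return flipped

# Example usage on random data:
rng = np.random.default_rng(0)
W, b, x = rng.normal(size=(10, 5)), rng.normal(size=10), rng.normal(size=5)
print(greedy_attackability(W, b, x, eps=1.0))
```

The greedy ordering by per-label flip cost mirrors the idea of exploring the label space from the most to the least vulnerable labels; the paper's estimator additionally comes with guarantees on computational efficiency and approximation accuracy.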

Published

2021-05-18

How to Cite

Yang, Z., Han, Y., & Zhang, X. (2021). Characterizing the Evasion Attackability of Multi-label Classifiers. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10647-10655. https://doi.org/10.1609/aaai.v35i12.17273

Section

AAAI Technical Track on Machine Learning V