Unfooling Perturbation-Based Post Hoc Explainers

Authors

  • Zachariah Carmichael, University of Notre Dame
  • Walter J. Scheirer, University of Notre Dame

DOI:

https://doi.org/10.1609/aaai.v37i6.25847

Keywords:

ML: Transparent, Interpretable, Explainable ML, DMKM: Anomaly/Outlier Detection, ML: Adversarial Learning & Robustness, ML: Bias and Fairness, PEAI: Accountability, PEAI: AI and Law, Justice, Regulation & Governance, PEAI: Bias, Fairness & Equity

Abstract

Monumental advancements in artificial intelligence (AI) have attracted the interest of doctors, lenders, judges, and other professionals. While these high-stakes decision-makers are optimistic about the technology, those familiar with AI systems are wary of the lack of transparency in their decision-making processes. Perturbation-based post hoc explainers offer a model-agnostic means of interpreting these systems while requiring only query-level access. However, recent work demonstrates that these explainers can be fooled adversarially, a discovery with adverse implications for auditors, regulators, and other sentinels. With this in mind, several natural questions arise: how can we audit these black-box systems, and how can we ascertain that the auditee is complying with the audit in good faith? In this work, we rigorously formalize this problem and devise a defense against adversarial attacks on perturbation-based explainers. We propose algorithms for the detection (CAD-Detect) and defense (CAD-Defend) of these attacks, which are aided by our novel conditional anomaly detection approach, KNN-CAD. We demonstrate that our approach successfully detects whether a black-box system adversarially conceals its decision-making process and mitigates the adversarial attack on real-world data for the prevalent explainers LIME and SHAP. The code for this work is available at https://github.com/craymichael/unfooling.
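To give a sense of how conditional anomaly detection can expose such concealment, the sketch below shows one way a kNN-based conditional anomaly score might be computed. This is a minimal illustration in the spirit of KNN-CAD, not the paper's implementation; the function name, the reference set X_ref/y_ref, and the threshold tau are assumptions made for illustration (see the linked repository for the authors' actual code).

```python
# Illustrative sketch only -- NOT the paper's KNN-CAD algorithm.
# Assumed setup: a reference dataset X_ref (NumPy array) with recorded
# black-box outputs y_ref, and a threshold tau calibrated on held-out data.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_conditional_anomaly_scores(X_ref, y_ref, X_query, y_query, k=10):
    """Score how anomalous the black box's output at each query point is,
    conditioned on its outputs at the k nearest reference points.
    Inputs are NumPy arrays; y_ref and y_query are 1-D output vectors."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_ref)
    _, idx = nn.kneighbors(X_query)       # indices of k nearest reference points
    neighbor_outputs = y_ref[idx]         # shape: (n_query, k)
    # Anomaly score: mean deviation of each query output from the outputs
    # the black box produced at neighboring in-distribution points.
    return np.abs(y_query[:, None] - neighbor_outputs).mean(axis=1)

# Usage sketch: flag queries whose score exceeds the calibrated threshold.
# scores = knn_conditional_anomaly_scores(X_ref, y_ref, X_q, y_q)
# suspected = scores > tau
```

Intuitively, a high score at a query point suggests the black box answers it differently from how it answers nearby in-distribution points, the kind of inconsistency one would expect from an adversarially scaffolded model that routes explainer-generated perturbation queries to an innocuous surrogate.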

Published

2023-06-26

How to Cite

Carmichael, Z., & Scheirer, W. J. (2023). Unfooling Perturbation-Based Post Hoc Explainers. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 6925-6934. https://doi.org/10.1609/aaai.v37i6.25847

Section

AAAI Technical Track on Machine Learning I