Towards Automating Model Explanations with Certified Robustness Guarantees
DOI:
https://doi.org/10.1609/aaai.v36i6.20651
Keywords:
Machine Learning (ML)
Abstract
Providing model explanations has gained significant popularity recently. In contrast to traditional feature-level model explanations, concept-based explanations are expressed in terms of high-level human concepts. However, existing concept-based explanation methods implicitly follow a two-step procedure that involves human intervention: humans must first define (or extract) the high-level concepts, and the importance scores of these identified concepts are then computed manually in a post-hoc way. This laborious process requires significant human effort and resources, which hinders large-scale deployment. In practice, automatically generating concept-based explanations without human intervention is challenging because defining the units of concept-based interpretability is subjective. In addition, owing to its data-driven nature, the interpretability itself is potentially susceptible to malicious manipulation. Our goal in this paper is therefore to free humans from this tedious process while ensuring that the generated explanations are provably robust to adversarial perturbations. We propose a novel concept-based interpretation method that not only automatically produces prototype-based concept explanations but also provides certified robustness guarantees for the generated explanations. We also conduct extensive experiments on real-world datasets to verify the desirable properties of the proposed method.
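The abstract does not detail how the certified robustness guarantee is obtained. As a point of reference only, one common way to certify a discrete explanation output (such as the index of the prototype used to explain an input) against L2-bounded perturbations is randomized smoothing. The sketch below is a minimal illustration under that assumption and is not the authors' method; all names (nearest_prototype, smoothed_explanation, sigma, n_samples) are hypothetical.

```python
# Illustrative sketch only: certifying the index of the nearest prototype via
# randomized smoothing. This is a generic technique, not the procedure from
# the paper, whose mechanism is not described in this abstract.
import numpy as np
from scipy.stats import norm


def nearest_prototype(x, prototypes):
    """Toy explanation function: index of the prototype closest to x."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    return int(np.argmin(dists))


def smoothed_explanation(x, prototypes, sigma=0.25, n_samples=1000,
                         alpha=0.001, seed=0):
    """Majority-vote prototype under Gaussian input noise, plus an L2 radius
    within which the smoothed (majority-vote) explanation provably stays the
    same (standard randomized-smoothing bound, not the paper's guarantee)."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(len(prototypes), dtype=int)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        counts[nearest_prototype(noisy, prototypes)] += 1

    top = int(np.argmax(counts))
    # Conservative lower confidence bound on the top-vote probability
    # (simple Hoeffding bound at level alpha).
    p_hat = counts[top] / n_samples
    p_lower = p_hat - np.sqrt(np.log(1.0 / alpha) / (2.0 * n_samples))
    if p_lower <= 0.5:
        return top, 0.0  # cannot certify; abstain
    radius = sigma * norm.ppf(p_lower)  # certified L2 radius
    return top, float(radius)


if __name__ == "__main__":
    prototypes = np.array([[0.0, 0.0], [3.0, 3.0], [6.0, 0.0]])
    x = np.array([0.4, 0.3])
    proto_idx, radius = smoothed_explanation(x, prototypes)
    print(f"explained by prototype {proto_idx}, certified L2 radius {radius:.3f}")
```

Under this assumption, the returned radius certifies that no additive perturbation of L2 norm below it can change which prototype the smoothed explainer selects; how the paper achieves and formalizes its guarantee should be taken from the full text.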
Published
2022-06-28
How to Cite
Huai, M., Liu, J., Miao, C., Yao, L., & Zhang, A. (2022). Towards Automating Model Explanations with Certified Robustness Guarantees. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6935-6943. https://doi.org/10.1609/aaai.v36i6.20651
Issue
Section
AAAI Technical Track on Machine Learning I