Towards Linking Local and Global Explanations for AI Assessments with Concept Explanation Clusters

Authors

  • Elena Haedecke (University of Bonn, Germany; Fraunhofer IAIS, Sankt Augustin, Germany)
  • Maram Akila (Fraunhofer IAIS, Sankt Augustin, Germany; Lamarr Institute, Sankt Augustin, Germany)
  • Laura von Rueden (Fraunhofer IAIS, Sankt Augustin, Germany; Hochschule für Technik Stuttgart, Germany)

DOI:

https://doi.org/10.1609/aaaiss.v4i1.31779

Abstract

Understanding the inner workings of artificial intelligence (AI) systems is important both in light of regulation (e.g., the EU AI Act) and to uncover hidden weaknesses. Local and global explanation methods can support this, but a scalable and human-centered combination is needed that joins the detail of the former with the efficiency of the latter. We therefore present our method, concept explanation clusters, as a step towards explaining (sub-)strategies of the model through human-understandable concepts: clusters are identified in the input data while model predictions are accounted for via local explanations. In this way, the benefits of local explanations are retained while allowing contextualisation on a larger (i.e., data-global) scale.
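The paper itself details the method; as a rough illustration of the general idea described in the abstract, the sketch below clusters per-sample local explanations to surface data-global patterns. It is not the authors' implementation: the linear model, the coefficient-times-value attribution, and the choice of k-means with four clusters are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the authors' method):
# cluster per-sample local explanations to approximate global (sub-)strategies.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy data and model standing in for the system under assessment.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Local explanations: per-sample feature attributions (here: coefficient * value).
attributions = X * model.coef_[0]          # shape: (n_samples, n_features)

# Cluster the local explanations; each cluster groups inputs the model
# treats similarly and can be inspected against human-understandable concepts.
k = 4                                      # number of clusters, chosen ad hoc
clusters = KMeans(n_clusters=k, random_state=0).fit_predict(attributions)

# Summarise each cluster by its mean attribution profile for human review.
for c in range(k):
    mean_attr = attributions[clusters == c].mean(axis=0)
    top = np.argsort(-np.abs(mean_attr))[:3]
    print(f"cluster {c}: size={np.sum(clusters == c)}, top features={top.tolist()}")
```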

Published

2024-11-08

How to Cite

Haedecke, E., Akila, M., & von Rueden, L. (2024). Towards Linking Local and Global Explanations for AI Assessments with Concept Explanation Clusters. Proceedings of the AAAI Symposium Series, 4(1), 106–109. https://doi.org/10.1609/aaaiss.v4i1.31779

Section

AI Trustworthiness and Risk Assessment for Challenging Contexts (ATRACC) - Short Papers