Beyond Technocratic XAI: The Who, What & How in Explanation Design

Authors

  • Ruchira Dhar, University of Copenhagen
  • Stephanie Brandl, University of Copenhagen
  • Ninell Oldenburg, University of Copenhagen
  • Anders Søgaard, University of Copenhagen

DOI:

https://doi.org/10.1609/aies.v8i1.36586

Abstract

The field of Explainable AI (XAI) offers a wide range of techniques for making complex models interpretable. Yet, in practice, generating meaningful explanations is a context-dependent task that requires intentional design choices to ensure accessibility and transparency. This paper reframes explanation as a situated design process—an approach particularly relevant for practitioners involved in building and deploying explainable systems. Drawing on prior research and principles from design thinking, we propose a three-part framework for explanation design in XAI: asking Who needs the explanation, What they need explained, and How that explanation should be delivered. We also emphasize the need for ethical considerations, including risks of epistemic inequality, reinforcing social inequities, and obscuring accountability and governance. By treating explanation as a sociotechnical design process, this framework encourages a context-aware approach to XAI that supports effective communication and the development of ethically responsible explanations.

Published

2025-10-15

How to Cite

Dhar, R., Brandl, S., Oldenburg, N., & Søgaard, A. (2025). Beyond Technocratic XAI: The Who, What & How in Explanation Design. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(1), 745-759. https://doi.org/10.1609/aies.v8i1.36586