Expert-Informed, User-Centric Explanations for Machine Learning

Authors

  • Michael Pazzani, UCSD
  • Severine Soltani, UCSD
  • Robert Kaufman, UCSD
  • Samson Qian, UCSD
  • Albert Hsiao, UCSD

DOI:

https://doi.org/10.1609/aaai.v36i11.21491

Keywords:

Machine Learning, Explainable AI, Cognitive Science, Ethnography

Abstract

We argue that the dominant approach to explainable AI for image classification, annotating images with heatmaps, provides little value for users unfamiliar with deep learning. Instead, explainable AI for images should produce the kinds of output that experts produce when communicating with one another, with apprentices, and with novices. We provide an expanded set of goals for explainable AI systems and propose a Turing Test for explainable AI.

Published

2022-06-28

How to Cite

Pazzani, M., Soltani, S., Kaufman, R., Qian, S., & Hsiao, A. (2022). Expert-Informed, User-Centric Explanations for Machine Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 12280-12286. https://doi.org/10.1609/aaai.v36i11.21491