Teaching Humans When to Defer to a Classifier via Exemplars

Authors

  • Hussein Mozannar Massachusetts Institute of Technology
  • Arvind Satyanarayan Massachusetts Institute of Technology
  • David Sontag Massachusetts Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v36i5.20469

Keywords:

Humans And AI (HAI), Machine Learning (ML), Cognitive Modeling & Cognitive Systems (CMS)

Abstract

Expert decision makers are starting to rely on data-driven automated agents to assist them with various tasks. For this collaboration to be effective, the human decision maker must have a mental model of when and when not to rely on the agent. In this work, we aim to ensure that human decision makers learn a valid mental model of the agent's strengths and weaknesses. To accomplish this goal, we propose an exemplar-based teaching strategy where humans solve a set of selected examples and, with our help, generalize from them to the domain. We present a novel parameterization of the human's mental model of the AI that applies a nearest neighbor rule in local regions surrounding the teaching examples. Using this model, we derive a near-optimal strategy for selecting a representative teaching set. We validate the benefits of our teaching strategy on a multi-hop question answering task with an interpretable AI model using crowd workers. We find that when workers draw the right lessons from the teaching stage, their task performance improves. We furthermore validate our method on a set of synthetic experiments.
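To make the mental-model parameterization concrete, the following is a minimal sketch of a nearest-neighbor deferral rule of the kind the abstract describes: the human generalizes each teaching exemplar's lesson (AI was right or wrong there) to a local region around it. This is an illustrative toy, not the paper's exact parameterization; the function name, the Euclidean feature space, and the `radius` threshold are all assumptions made for the example.

```python
import numpy as np

def defer_by_nearest_exemplar(x, exemplars, ai_correct, radius=1.0):
    """Illustrative 1-NN rule: decide whether to rely on the AI at input x.

    exemplars  -- (n, d) array of teaching-example feature vectors
    ai_correct -- (n,) booleans: was the AI correct on each exemplar?
    radius     -- locality threshold (hypothetical); beyond it, no lesson applies

    Returns True ("rely on AI") if the nearest exemplar within `radius`
    is one the AI got right, False ("take over") if the AI got it wrong,
    and None if x falls outside every exemplar's local region.
    """
    dists = np.linalg.norm(exemplars - x, axis=1)
    i = int(np.argmin(dists))
    if dists[i] > radius:
        return None  # outside all local regions: no generalization
    return bool(ai_correct[i])

# Toy usage: the AI succeeded on an exemplar near (0,0) and failed near (3,3).
ex = np.array([[0.0, 0.0], [3.0, 3.0]])
ok = np.array([True, False])
print(defer_by_nearest_exemplar(np.array([0.2, 0.1]), ex, ok))    # True: rely on AI
print(defer_by_nearest_exemplar(np.array([2.9, 3.1]), ex, ok))    # False: take over
print(defer_by_nearest_exemplar(np.array([10.0, 10.0]), ex, ok))  # None: no lesson
```

Under this toy model, teaching-set selection amounts to choosing exemplars whose local regions cover the domain well, which is the optimization problem the paper addresses.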

Published

2022-06-28

How to Cite

Mozannar, H., Satyanarayan, A., & Sontag, D. (2022). Teaching Humans When to Defer to a Classifier via Exemplars. Proceedings of the AAAI Conference on Artificial Intelligence, 36(5), 5323-5331. https://doi.org/10.1609/aaai.v36i5.20469

Section

AAAI Technical Track on Humans and AI