Handling Model Uncertainty and Multiplicity in Explanations via Model Reconciliation

Authors

  • Sarath Sreedharan Arizona State University
  • Tathagata Chakraborti Arizona State University
  • Subbarao Kambhampati Arizona State University

DOI:

https://doi.org/10.1609/icaps.v28i1.13930

Keywords:

Explanation, Model Uncertainty

Abstract

Model reconciliation has been proposed as a way for an agent to explain its decisions to a human who may have a different understanding of the same planning problem, by framing those explanations in terms of the differences between the two models. However, the human's mental model (and hence the model difference) is often not known precisely, and such explanations cannot be readily computed. In this paper, we show how the explanation generation process evolves in the presence of such model uncertainty or incompleteness by generating conformant explanations that are applicable to a set of possible models. We also show how such explanations can contain superfluous information and how these redundancies can be reduced using conditional explanations that iterate with the human to attain common ground. Finally, we introduce an anytime version of this approach and empirically demonstrate the trade-offs involved in the different forms of explanations in terms of the computational overhead for the agent and the communication overhead for the human. We illustrate these concepts in three well-known planning domains as well as in a demonstration on a robot involved in a typical search and reconnaissance scenario with an external human supervisor.

Published

2018-06-15

How to Cite

Sreedharan, S., Chakraborti, T., & Kambhampati, S. (2018). Handling Model Uncertainty and Multiplicity in Explanations via Model Reconciliation. Proceedings of the International Conference on Automated Planning and Scheduling, 28(1), 518-526. https://doi.org/10.1609/icaps.v28i1.13930