Handling Model Uncertainty and Multiplicity in Explanations via Model Reconciliation
DOI: https://doi.org/10.1609/icaps.v28i1.13930
Keywords: Explanation, Model Uncertainty
Abstract
Model reconciliation has been proposed as a way for an agent to explain its decisions to a human who may have a different understanding of the same planning problem, by framing its explanations in terms of the differences between their models. However, the human's mental model (and hence these differences) is often not known precisely, and such explanations cannot be readily computed. In this paper, we show how the explanation generation process evolves in the presence of such model uncertainty or incompleteness by generating conformant explanations that are applicable to a set of possible models. We also show how such explanations can contain superfluous information and how such redundancies can be reduced using conditional explanations that iterate with the human to attain common ground. Finally, we introduce an anytime version of this approach and empirically demonstrate the trade-offs involved in the different forms of explanations in terms of the computational overhead for the agent and the communication overhead for the human. We illustrate these concepts in three well-known planning domains as well as in a demonstration on a robot involved in a typical search and reconnaissance scenario with an external human supervisor.