Explainable Goal Recognition: A Framework Based on Weight of Evidence

Authors

  • Abeer Alshehri — University of Melbourne, Melbourne, Australia; King Khalid University, Abha, Saudi Arabia
  • Tim Miller — University of Melbourne, Melbourne, Australia
  • Mor Vered — Monash University, Melbourne, Australia

DOI:

https://doi.org/10.1609/icaps.v33i1.27173

Keywords:

Plan recognition, plan management, and goal reasoning

Abstract

We introduce and evaluate an eXplainable goal recognition (XGR) model that uses the Weight of Evidence (WoE) framework to explain goal recognition problems. Our model provides human-centered explanations that answer "why?" and "why not?" questions. We computationally evaluate the performance of our system over eight goal recognition domains, showing that it does not significantly increase the underlying recognizer's run time. Using a human behavioral study to obtain ground truth from human annotators, we further show that the XGR model can successfully generate human-like explanations. We then report on a study with 40 participants who observe agents playing a Sokoban game and then receive explanations of the goal recognition output. We investigate the understanding participants gained from the explanations, measured through task prediction, explanation satisfaction, and trust.
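The paper's own formulation is not reproduced on this page, but the classical Weight of Evidence quantity the framework builds on is the log-likelihood ratio of an observation under a hypothesized goal versus its complement. The sketch below illustrates only that textbook quantity; the function name and the likelihood values are hypothetical and are not taken from the paper.

```python
import math

def weight_of_evidence(p_obs_given_goal, p_obs_given_not_goal):
    """Classical WoE: log-likelihood ratio of an observation
    under a hypothesized goal vs. its complement."""
    return math.log(p_obs_given_goal / p_obs_given_not_goal)

# Hypothetical likelihoods for an observed move in a Sokoban-like grid:
# how probable is the move if the agent pursues goal A vs. any other goal?
p_move_given_A = 0.8
p_move_given_not_A = 0.2

woe = weight_of_evidence(p_move_given_A, p_move_given_not_A)
# A positive WoE marks the observation as evidence *for* goal A
# (a "why?" answer); a negative value would count against it
# (a "why not?" answer).
print(f"WoE for goal A: {woe:.3f}")
```

In this reading, an explanation ranks observations by how strongly they weigh for or against each candidate goal, which is what lets the model answer both question types from the same quantity.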

Published

2023-07-01

How to Cite

Alshehri, A., Miller, T., & Vered, M. (2023). Explainable Goal Recognition: A Framework Based on Weight of Evidence. Proceedings of the International Conference on Automated Planning and Scheduling, 33(1), 7-16. https://doi.org/10.1609/icaps.v33i1.27173