Argumentation for Explainable Scheduling


  • Kristijonas Čyras, Imperial College London
  • Dimitrios Letsios, Imperial College London
  • Ruth Misener, Imperial College London
  • Francesca Toni, Imperial College London



Mathematical optimization offers highly effective tools for finding solutions to problems with well-defined goals, notably scheduling. However, optimization solvers are often unexplainable black boxes: their solutions are inaccessible to users, who cannot interact with them. We define a novel paradigm using argumentation to empower the interaction between optimization solvers and users, supported by tractable explanations which certify or refute solutions. A solution can come from a solver or be of interest to a user (in the context of ‘what-if’ scenarios). Specifically, we define argumentative and natural language explanations for why a schedule is (not) feasible, (not) efficient, or (not) satisfying fixed user decisions, based on models of the fundamental makespan scheduling problem in terms of abstract argumentation frameworks (AFs). We define three types of AFs, whose stable extensions are in one-to-one correspondence with schedules that are feasible, efficient, and satisfying fixed decisions, respectively. We extract the argumentative explanations from these AFs and the natural language explanations from the argumentative ones.
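To illustrate the core mechanism, the following is a minimal sketch of an abstract argumentation framework and a brute-force enumeration of its stable extensions (a set of arguments that is conflict-free and attacks every argument outside it). The scheduling encoding shown is a simplified, hypothetical one, not the paper's actual construction: each argument assigns a job to a machine, and arguments assigning the same job to different machines attack each other, so each stable extension picks one machine per job, i.e. a feasible schedule.

```python
from itertools import combinations

def stable_extensions(arguments, attacks):
    """Enumerate all stable extensions of an AF by brute force:
    a set S is stable iff it is conflict-free (no attack inside S)
    and every argument outside S is attacked by some member of S."""
    attacks = set(attacks)
    extensions = []
    for r in range(len(arguments) + 1):
        for candidate in combinations(sorted(arguments), r):
            s = set(candidate)
            conflict_free = not any((a, b) in attacks for a in s for b in s)
            attacks_rest = all(
                any((a, b) in attacks for a in s) for b in arguments - s
            )
            if conflict_free and attacks_rest:
                extensions.append(s)
    return extensions

# Hypothetical toy encoding: two jobs, two machines.
# "j1->m1" means job 1 runs on machine 1; conflicting assignments
# of the same job attack each other in both directions.
args = {"j1->m1", "j1->m2", "j2->m1", "j2->m2"}
atts = [("j1->m1", "j1->m2"), ("j1->m2", "j1->m1"),
        ("j2->m1", "j2->m2"), ("j2->m2", "j2->m1")]

for ext in stable_extensions(args, atts):
    print(sorted(ext))  # each extension is one complete schedule
```

With this toy AF there are four stable extensions, one per way of assigning the two jobs to the two machines, mirroring the paper's one-to-one correspondence between stable extensions and feasible schedules.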




How to Cite

Čyras, K., Letsios, D., Misener, R., & Toni, F. (2019). Argumentation for Explainable Scheduling. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 2752-2759.



AAAI Technical Track: Knowledge Representation and Reasoning