Abduction-Based Explanations for Machine Learning Models


  • Alexey Ignatiev University of Lisbon
  • Nina Narodytska VMware Research
  • Joao Marques-Silva ISDCT SB RAS




The growing range of applications of Machine Learning (ML) in a multitude of settings motivates the need to compute small explanations for the predictions made. Small explanations are generally accepted as easier for human decision makers to understand. Most earlier work on computing explanations is based on heuristic approaches, which provide no guarantees of quality in terms of how close such solutions are to cardinality- or subset-minimal explanations. This paper develops a constraint-agnostic solution for computing explanations for any ML model. The proposed solution exploits abductive reasoning, and imposes the requirement that the ML model can be represented as a set of constraints in some target constraint reasoning system for which the decision problem can be answered with an oracle. The experimental results, obtained on well-known datasets, validate the scalability of the proposed approach as well as the quality of the computed solutions.
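To illustrate the kind of oracle-based abductive computation the abstract describes, the following is a minimal sketch (not the paper's implementation) of a deletion-based linear search for a subset-minimal explanation. It assumes a hypothetical `entails` oracle that decides whether fixing a given subset of feature values forces the model's prediction; the function and variable names, and the toy model below, are illustrative assumptions.

```python
# Hypothetical sketch: deletion-based extraction of a subset-minimal
# explanation, assuming an oracle `entails(subset)` that returns True iff
# fixing exactly the features in `subset` guarantees the prediction
# (standing in for the constraint-reasoning oracle in the paper).

def minimal_explanation(assignment, entails):
    """Shrink a full feature assignment to a subset-minimal explanation.

    `assignment` is a dict {feature: value}. Each feature is tentatively
    dropped; if the prediction is no longer entailed, the feature is
    necessary and is restored. The result is subset-minimal: removing any
    remaining feature would break entailment.
    """
    explanation = dict(assignment)
    for feature in list(assignment):
        value = explanation.pop(feature)       # try dropping this literal
        if not entails(explanation):           # prediction no longer forced?
            explanation[feature] = value       # the literal is necessary
    return explanation

# Toy oracle (illustrative): the model predicts the target class iff
# x1 == 1 and x2 == 1; feature x3 is irrelevant.
def toy_entails(subset):
    return subset.get("x1") == 1 and subset.get("x2") == 1

print(minimal_explanation({"x1": 1, "x2": 1, "x3": 0}, toy_entails))
# -> {'x1': 1, 'x2': 1}
```

Each feature requires one oracle call, so the search is linear in the number of features; the oracle calls themselves are where the constraint-reasoning cost lies.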




How to Cite

Ignatiev, A., Narodytska, N., & Marques-Silva, J. (2019). Abduction-Based Explanations for Machine Learning Models. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 1511-1519. https://doi.org/10.1609/aaai.v33i01.33011511



AAAI Technical Track: Constraint Satisfaction and Optimization