Tractable Explanations for d-DNNF Classifiers


  • Xuanxiang Huang, University of Toulouse
  • Yacine Izza, University of Toulouse
  • Alexey Ignatiev, Faculty of IT, Monash University
  • Martin Cooper, University of Toulouse 3
  • Nicholas Asher, CNRS
  • Joao Marques-Silva, IRIT, CNRS



Knowledge Representation and Reasoning (KRR)


Compilation into propositional languages finds a growing number of practical uses, including in constraint programming, diagnosis and machine learning (ML), among others. One concrete example is the use of propositional languages as classifiers, and a natural question is how to explain the predictions they make. This paper shows that for classifiers represented in some of the best-known propositional languages, different kinds of explanations can be computed in polynomial time. These languages include deterministic decomposable negation normal form (d-DNNF) and, by extension, any propositional language that is strictly less succinct than d-DNNF. Furthermore, the paper describes optimizations, specific to Sentential Decision Diagrams (SDDs), which are shown to yield more efficient algorithms in practice.
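The tractability claim rests on a well-known property of d-DNNF: decomposable AND nodes and deterministic OR nodes admit polynomial-time model counting. The sketch below is not the paper's algorithm — the tuple encoding, function names, and the deletion-based loop are illustrative assumptions — but it shows how that property can yield a polynomial-time abductive explanation for a positive instance: repeatedly drop a feature and keep the drop whenever the circuit remains valid (its model count reaches 2^#free-vars) under the remaining literals.

```python
# Hypothetical d-DNNF encoding: ('lit', var, sign), ('and', [children]),
# ('or', [children]).  AND nodes are decomposable (children on disjoint
# variables); OR nodes are deterministic (children mutually exclusive).

def free_vars(node):
    if node[0] == 'lit':
        return {node[1]}
    return set().union(*(free_vars(c) for c in node[1]))

def count(node, assign):
    """Models of `node` over vars(node) consistent with a partial assignment."""
    kind = node[0]
    if kind == 'lit':
        _, v, sign = node
        return (1 if assign[v] == sign else 0) if v in assign else 1
    if kind == 'and':
        # Decomposability: counts of variable-disjoint children multiply.
        result = 1
        for c in node[1]:
            result *= count(c, assign)
        return result
    # Determinism: disjuncts are mutually exclusive, so counts add up;
    # pad each child for variables of the node it does not mention.
    vs = free_vars(node)
    total = 0
    for c in node[1]:
        gap = len(vs - free_vars(c) - assign.keys())
        total += count(c, assign) * (2 ** gap)
    return total

def axp(root, instance):
    """Deletion-based abductive explanation for a positive instance:
    drop each feature whose removal keeps the circuit valid under the
    remaining literals (validity <=> model count equals 2^#free-vars)."""
    vs = free_vars(root)
    expl = dict(instance)
    for v in list(instance):
        trial = {k: b for k, b in expl.items() if k != v}
        if count(root, trial) == 2 ** len(vs - trial.keys()):
            expl = trial  # v is redundant for the prediction
    return expl

# Toy circuit: (x1 AND x2) OR (NOT x1 AND x3) -- deterministic via x1.
root = ('or', [('and', [('lit', 1, True), ('lit', 2, True)]),
               ('and', [('lit', 1, False), ('lit', 3, True)])])
```

For the instance x1=x2=x3=1 (predicted true), `axp` drops x1 and keeps {x2=1, x3=1}, since fixing x2 and x3 already forces the prediction regardless of x1. Each validity check is one linear traversal, so the whole loop is polynomial in the circuit size.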




How to Cite

Huang, X., Izza, Y., Ignatiev, A., Cooper, M., Asher, N., & Marques-Silva, J. (2022). Tractable Explanations for d-DNNF Classifiers. Proceedings of the AAAI Conference on Artificial Intelligence, 36(5), 5719-5728.



AAAI Technical Track on Knowledge Representation and Reasoning