Solving Explainability Queries with Quantification: The Case of Feature Relevancy


  • Xuanxiang Huang, University of Toulouse
  • Yacine Izza, University of Toulouse & National University of Singapore
  • Joao Marques-Silva, IRIT, CNRS



Keywords: CSO: Satisfiability; CSO: Constraint Satisfaction; CSO: Solvers and Tools; ML: Transparent, Interpretable, Explainable ML


Trustable explanations of machine learning (ML) models are vital in high-risk uses of artificial intelligence (AI). Apart from the computation of trustable explanations, a number of explainability queries have been identified and studied in recent work. Some of these queries involve solving quantification problems, either in propositional or in more expressive logics. This paper investigates one of these quantification problems, namely the feature relevancy problem (FRP), i.e. deciding whether a (possibly sensitive) feature can occur in some explanation of a prediction. In contrast with earlier work, which studied FRP for specific classifiers, this paper proposes a novel algorithm for the FRP quantification problem that is applicable to any ML classifier meeting minor requirements. Furthermore, the paper shows that the novel algorithm is efficient in practice. The experimental results, obtained using random forests (RFs) induced from well-known publicly available datasets, demonstrate that the proposed solution outperforms existing state-of-the-art solvers for Quantified Boolean Formulas (QBF) by orders of magnitude. Finally, the paper also identifies a novel family of formulas that are challenging for current state-of-the-art QBF solvers.
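To make the FRP query concrete, the following is a minimal brute-force sketch, not the paper's algorithm (which the abstract only outlines): for a toy Boolean classifier, a feature is relevant iff it appears in some subset-minimal sufficient reason (abductive explanation) for the prediction. All names here (`is_sufficient`, `axps`, `feature_relevant`, `majority3`) are illustrative assumptions, and exhaustive enumeration is feasible only for tiny feature counts; the paper's contribution is precisely avoiding this naive quantification.

```python
from itertools import product, combinations

def is_sufficient(f, x, fixed):
    """Check that fixing x's values on `fixed` forces f's prediction
    for every completion of the remaining features (the inner
    universal quantifier of the FRP query)."""
    n = len(x)
    for y in product((0, 1), repeat=n):
        if all(y[i] == x[i] for i in fixed) and f(y) != f(x):
            return False
    return True

def axps(f, x):
    """Enumerate all subset-minimal sufficient feature sets
    (abductive explanations, AXps) for instance x."""
    n = len(x)
    suff = [frozenset(s)
            for k in range(n + 1)
            for s in combinations(range(n), k)
            if is_sufficient(f, x, s)]
    return [s for s in suff if not any(t < s for t in suff)]

def feature_relevant(f, x, i):
    """FRP query (the outer existential quantifier): does feature i
    occur in some explanation of the prediction f(x)?"""
    return any(i in s for s in axps(f, x))

# Toy classifier: majority vote over three Boolean features.
majority3 = lambda y: int(sum(y) >= 2)
x = (1, 1, 0)
# The only AXp for x is {0, 1}: fixing features 0 and 1 to 1
# guarantees majority3 predicts 1, so features 0 and 1 are
# relevant while feature 2 is not.
```

The exists-forall structure visible here (a feature set exists whose fixed values suffice for all completions) is what makes FRP a natural target for QBF encodings, against which the paper's algorithm is compared.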




How to Cite

Huang, X., Izza, Y., & Marques-Silva, J. (2023). Solving Explainability Queries with Quantification: The Case of Feature Relevancy. Proceedings of the AAAI Conference on Artificial Intelligence, 37(4), 3996-4006.



AAAI Technical Track on Constraint Satisfaction and Optimization