Computing Probabilistic Explanations for ML Models: Fixed-Parameter Algorithms

Authors

  • Sebastian Ordyniak University of Leeds
  • Mateusz Rychlicki University of Leeds
  • Stefan Szeider TU Wien

DOI:

https://doi.org/10.1609/aaai.v40i29.39646

Abstract

Machine learning models now drive many critical decisions, making explanations of their reasoning essential. Recent work analyzes the complexity of exact explanations in transparent models, but these explanations are often too large for practical use. This has motivated research into probabilistic alternatives. We study probabilistic extensions that allow controlled uncertainty while maintaining rigorous foundations. We analyze three basic model types: decision trees, decision lists, and decision sets. We introduce algorithms for computing both local and global probabilistic explanations for these models. Our main result shows that computing minimum-size probabilistic explanations is fixed-parameter tractable when parameterized by natural structural properties: the number of terms for decision lists and decision sets, and, for decision trees, the minimum of the number of positive and the number of negative leaves.
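To make the notion of a local probabilistic explanation concrete, the following sketch checks, by brute force, which feature sets of a toy decision tree keep the prediction with probability at least a threshold delta (a common formalization in this line of work). The tree, the instance, and the helper names are illustrative assumptions, not taken from the paper, and the exhaustive search is of course exponential, unlike the fixed-parameter algorithms the paper contributes.

```python
from itertools import product

# Toy decision tree over three Boolean features (hypothetical example,
# not from the paper): split on x0, then on x1 or x2.
def tree(x):
    if x[0] == 1:
        return 1 if x[1] == 1 else 0
    return 1 if x[2] == 1 else 0

def precision(instance, S):
    """Fraction of completions that agree with tree(instance) when the
    features in S are fixed to their values in `instance` and the
    remaining features range uniformly over {0, 1}."""
    target = tree(instance)
    free = [i for i in range(len(instance)) if i not in S]
    agree = total = 0
    for bits in product([0, 1], repeat=len(free)):
        y = list(instance)
        for i, b in zip(free, bits):
            y[i] = b
        total += 1
        agree += tree(tuple(y)) == target
    return agree / total

def smallest_delta_explanation(instance, delta):
    """Brute force: a smallest feature set S with precision >= delta.
    Enumerates subsets in order of increasing size."""
    n = len(instance)
    for mask in sorted(range(2 ** n), key=lambda m: bin(m).count("1")):
        S = {i for i in range(n) if mask >> i & 1}
        if precision(instance, S) >= delta:
            return S
    return set(range(n))
```

For the instance (1, 1, 0), fixing only feature 1 already preserves the prediction on 75% of completions, while certainty (delta = 1) requires fixing both features 0 and 1; the trade-off between explanation size and delta is exactly the controlled uncertainty mentioned in the abstract.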

Published

2026-03-14

How to Cite

Ordyniak, S., Rychlicki, M., & Szeider, S. (2026). Computing Probabilistic Explanations for ML Models: Fixed-Parameter Algorithms. Proceedings of the AAAI Conference on Artificial Intelligence, 40(29), 24622-24629. https://doi.org/10.1609/aaai.v40i29.39646

Section

AAAI Technical Track on Machine Learning VI