A General Theoretical Framework for Learning Smallest Interpretable Models
DOI:
https://doi.org/10.1609/aaai.v38i9.28937
Keywords:
KRR: Computational Complexity of Reasoning, ML: Transparent, Interpretable, Explainable ML
Abstract
We develop a general algorithmic framework that allows us to obtain fixed-parameter tractability for computing smallest symbolic models that represent given data. Our framework applies to all ML model types that admit a certain extension property. By showing this extension property for decision trees, decision sets, decision lists, and binary decision diagrams, we obtain that minimizing these fundamental model types is fixed-parameter tractable. Our framework even applies to ensembles, which combine individual models by majority decision.
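To illustrate the problem the abstract addresses, the sketch below finds the smallest decision tree (measured in inner nodes) consistent with a tiny binary dataset. This is a naive exhaustive search, not the paper's fixed-parameter-tractable algorithm; all function names here are hypothetical, and the example only serves to make the minimization objective concrete.

```python
def fits(examples, size):
    """Check whether some decision tree with at most `size` inner nodes
    classifies every (features, label) example correctly.
    Naive exhaustive search over split features and size budgets."""
    labels = {label for _, label in examples}
    if len(labels) <= 1:
        return True          # a single leaf suffices
    if size == 0:
        return False         # no budget left, but labels still mixed
    n_feats = len(examples[0][0])
    for f in range(n_feats):
        left = [e for e in examples if e[0][f] == 0]
        right = [e for e in examples if e[0][f] == 1]
        if not left or not right:
            continue         # split does not separate anything
        # spend one node on the split, divide the rest between subtrees
        for left_budget in range(size):
            if fits(left, left_budget) and fits(right, size - 1 - left_budget):
                return True
    return False

def smallest_tree_size(examples):
    """Smallest number of inner nodes of any consistent decision tree."""
    size = 0
    while not fits(examples, size):
        size += 1
    return size

# XOR needs 3 inner nodes; AND needs only 2.
xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
```

The exhaustive search runs in exponential time; the point of the paper's framework is that, for model types with the extension property, this kind of minimization becomes fixed-parameter tractable.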
Published
2024-03-24
How to Cite
Ordyniak, S., Paesani, G., Rychlicki, M., & Szeider, S. (2024). A General Theoretical Framework for Learning Smallest Interpretable Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(9), 10662-10669. https://doi.org/10.1609/aaai.v38i9.28937
Section
AAAI Technical Track on Knowledge Representation and Reasoning