Pantypes: Diverse Representatives for Self-Explainable Models
DOI:
https://doi.org/10.1609/aaai.v38i12.29223
Keywords:
ML: Transparent, Interpretable, Explainable ML; CV: Interpretability, Explainability, and Transparency; CV: Bias, Fairness & Privacy; ML: Classification and Regression; ML: Clustering; ML: Deep Learning Algorithms; ML: Dimensionality Reduction/Feature Selection; ML: Ethics, Bias, and Fairness; PEAI: Accountability, Interpretability & Explainability
Abstract
Prototypical self-explainable classifiers have emerged to meet the growing demand for interpretable AI systems. These classifiers are designed to incorporate high transparency in their decisions by basing inference on similarity with learned prototypical objects. While these models are designed with diversity in mind, the learned prototypes often do not sufficiently represent all aspects of the input distribution, particularly those in low-density regions. Such lack of sufficient data representation, known as representation bias, has been associated with various detrimental properties related to machine learning diversity and fairness. In light of this, we introduce pantypes, a new family of prototypical objects designed to capture the full diversity of the input distribution through a sparse set of objects. We show that pantypes can empower prototypical self-explainable models by occupying divergent regions of the latent space and thus fostering high diversity, interpretability and fairness.
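To make the mechanism described in the abstract concrete, the sketch below illustrates the general idea behind prototypical self-explainable classifiers: inputs are embedded into a latent space, classified by similarity to a small set of learned prototype vectors, and a diversity term discourages the prototypes from collapsing onto the same latent region. This is a minimal illustrative sketch in PyTorch, not the authors' pantype formulation; the encoder architecture, the squared-distance similarity, the log-determinant diversity term, and all names (ProtoNet, diversity_loss) are assumptions made purely for illustration.

```python
# Minimal illustrative sketch (PyTorch): prototype-similarity classification
# with a diversity term that rewards prototypes for spreading out in latent
# space. This is NOT the pantype method from the paper, only a generic
# stand-in to make "diverse prototypical objects" concrete.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProtoNet(nn.Module):  # hypothetical name, not from the paper
    def __init__(self, in_dim=784, latent_dim=32, n_prototypes=10, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(            # assumed simple encoder
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Learned prototypical objects living in the latent space.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, latent_dim))
        # Linear layer mapping prototype similarities to class logits.
        self.classifier = nn.Linear(n_prototypes, n_classes)

    def forward(self, x):
        z = self.encoder(x)                        # (B, latent_dim)
        # Similarity = negative squared Euclidean distance to each prototype.
        sims = -torch.cdist(z, self.prototypes) ** 2   # (B, n_prototypes)
        return self.classifier(sims), sims


def diversity_loss(prototypes, eps=1e-4):
    """Encourage prototypes to occupy divergent latent regions by maximizing
    the log-volume of their Gram matrix. Purely illustrative; the paper's
    actual objective may differ."""
    gram = prototypes @ prototypes.T
    gram = gram + eps * torch.eye(gram.size(0))
    return -torch.logdet(gram)


# Usage sketch: combine classification and diversity objectives on a dummy batch.
model = ProtoNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
logits, _ = model(x)
loss = F.cross_entropy(logits, y) + 0.1 * diversity_loss(model.prototypes)
opt.zero_grad()
loss.backward()
opt.step()
```

Because inference is a linear read-out of prototype similarities, each prediction can be traced back to the prototypes that most resemble the input, which is what makes this model family self-explainable.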
Published
2024-03-24
How to Cite
Kjærsgaard, R., Boubekki, A., & Clemmensen, L. (2024). Pantypes: Diverse Representatives for Self-Explainable Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 13230-13237. https://doi.org/10.1609/aaai.v38i12.29223
Issue
Vol. 38 No. 12 (2024)
Section
AAAI Technical Track on Machine Learning III