ProtoArgNet: Interpretable Image Classification with Super-Prototypes and Argumentation
DOI: https://doi.org/10.1609/aaai.v39i2.32173
Abstract
We propose ProtoArgNet, a novel interpretable deep neural architecture for image classification in the spirit of prototypical-part-learning as found, e.g., in ProtoPNet. While earlier approaches associate every class with multiple prototypical-parts, ProtoArgNet uses super-prototypes that combine prototypical-parts into a unified class representation. This is done by combining local activations of prototypes in an MLP-like manner, enabling the localization of prototypes and learning (non-linear) spatial relationships among them. By leveraging a form of argumentation, ProtoArgNet is capable of providing both supporting (i.e. 'this looks like that') and attacking (i.e. 'this differs from that') explanations. We demonstrate on several datasets that ProtoArgNet outperforms state-of-the-art prototypical-part-learning approaches. Moreover, the argumentation component in ProtoArgNet is customisable to the user's cognitive requirements by a process of sparsification, which leads to more compact explanations compared to state-of-the-art approaches.
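The abstract's core idea, combining local prototype activations through an MLP-like head whose contributions can both support and attack a class, can be sketched as follows. This is a hypothetical illustration only, not the authors' implementation: the similarity-map input format, the layer sizes, and the sign-based split of contributions into supporting versus attacking evidence are all assumptions made for the sketch.

```python
import numpy as np

def super_prototype_scores(sim_maps, W1, b1, W2, b2):
    """Hypothetical super-prototype head (not the paper's code).

    sim_maps: array of shape (P, H, W) -- one similarity map per
    prototypical part. Instead of max-pooling each map independently
    (as in ProtoPNet-style models), we flatten the maps so spatial
    layout is preserved, then pass them through a small MLP, letting
    the head learn (non-linear) spatial relationships among prototypes.
    """
    x = sim_maps.reshape(-1)           # keep spatial positions of activations
    h = np.maximum(0.0, W1 @ x + b1)   # non-linear combination (ReLU MLP layer)
    logits = W2 @ h + b2               # one logit per class
    return logits, h

def support_attack(W2, h):
    """Split each class logit into supporting (positive) and attacking
    (negative) contributions -- a rough analogue of the argumentative
    reading: 'this looks like that' vs. 'this differs from that'."""
    contrib = W2 * h                   # (C, hidden): per-unit contributions
    support = np.clip(contrib, 0, None).sum(axis=1)
    attack = np.clip(contrib, None, 0).sum(axis=1)
    return support, attack
```

By construction, `support + attack` recovers the weight-times-activation part of each class logit, so every prediction decomposes exactly into supporting and attacking evidence plus a bias term.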
Published
2025-04-11
How to Cite
Ayoobi, H., Potyka, N., & Toni, F. (2025). ProtoArgNet: Interpretable Image Classification with Super-Prototypes and Argumentation. Proceedings of the AAAI Conference on Artificial Intelligence, 39(2), 1791–1799. https://doi.org/10.1609/aaai.v39i2.32173
Section
AAAI Technical Track on Computer Vision I