CIP-Net: Continual Interpretable Prototype-based Network

Authors

  • Federico Di Valerio, Sapienza University of Rome, Italy
  • Michela Proietti, Sapienza University of Rome, Italy
  • Alessio Ragno, Institut National des Sciences Appliquées de Lyon; EPITA Research Laboratory (LRE)
  • Roberto Capobianco, Sony AI

DOI:

https://doi.org/10.1609/aaai.v40i25.39216

Abstract

Continual learning requires models to learn new tasks over time without forgetting what they have already learned. A key challenge in this setting is catastrophic forgetting, where learning new information causes the model to lose performance on previous tasks. Recently, explainable AI has been proposed as a promising way to better understand and reduce forgetting. In particular, self-explainable models are useful because they generate explanations during prediction, which can help preserve knowledge. However, most existing explainable approaches rely on post-hoc explanations or require additional memory for each new task, which limits their scalability. In this work, we introduce CIP-Net, an exemplar-free self-explainable prototype-based model designed for continual learning. CIP-Net avoids storing past examples and maintains a simple architecture, while still providing useful explanations and strong performance. We demonstrate that CIP-Net achieves state-of-the-art performance compared to previous exemplar-free and self-explainable methods in both task- and class-incremental settings, while incurring significantly lower memory overhead. This makes it a practical and interpretable solution for continual learning.

Published

2026-03-14

How to Cite

Di Valerio, F., Proietti, M., Ragno, A., & Capobianco, R. (2026). CIP-Net: Continual Interpretable Prototype-based Network. Proceedings of the AAAI Conference on Artificial Intelligence, 40(25), 20772–20780. https://doi.org/10.1609/aaai.v40i25.39216

Issue

Section

AAAI Technical Track on Machine Learning II