DiAPR: Dimensionally-Allocated Prototype Refinement for Non-Exemplar Class Incremental Learning
DOI:
https://doi.org/10.1609/aaai.v40i25.39263
Abstract
Non-Exemplar Class Incremental Learning (NECIL) strives to preserve classification performance on an evolving data stream without revisiting old-class exemplars. Current methods mitigate catastrophic forgetting by replaying and augmenting historical prototypes as surrogates for old classes. However, they treat prototypes as holistic representations and apply only global-level augmentations, overlooking dimensional semantic disparity and old-new class relationships; as a result, they fail to maintain old-class discriminability and adaptability to the evolving feature space. To address this challenge, we propose Dimensionally-Allocated Prototype Refinement (DiAPR), a granular framework that progressively refines prototypes through three modules so that they remain class-separable in the new feature space. Specifically, Distribution-aware Pairing (DAP) captures old-new class semantic consistency to guide Granular Semantic Allocation (GSA) in dimension-wise conflation, while Cross-Dimensional Transition (CDT) enhances cross-dimensional dependencies. The resulting prototypes sharpen classifier decision boundaries. Moreover, CDT inherently enables softened feature alignment, thereby yielding a more compatible feature space. Extensive experiments demonstrate DiAPR’s superiority, with improvements over the state of the art of 2.35%, 0.70%, and 0.96% on three CIFAR-100 settings, 1.03%, 0.54%, and 0.40% on Tiny-ImageNet, and 0.60% on ImageNet-Subset.
Published
2026-03-14
How to Cite
Gao, R., Zhao, Q., & Fu, K. (2026). DiAPR: Dimensionally-Allocated Prototype Refinement for Non-Exemplar Class Incremental Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(25), 21189–21197. https://doi.org/10.1609/aaai.v40i25.39263
Section
AAAI Technical Track on Machine Learning II