Non-exemplar Online Class-Incremental Continual Learning via Dual-Prototype Self-Augment and Refinement
DOI:
https://doi.org/10.1609/aaai.v38i11.29165
Keywords:
ML: Life-Long and Continual Learning, ML: Applications
Abstract
This paper investigates a new, practical, but challenging problem named Non-exemplar Online Class-incremental continual Learning (NO-CL), which aims to preserve the discernibility of base classes without buffering data examples and to efficiently learn novel classes continuously from a single-pass (i.e., online) data stream. The challenges of this task are mainly two-fold: (1) Both base and novel classes suffer from severe catastrophic forgetting, as no previous samples are available for replay. (2) As the online data can only be observed once, there is no way to fully re-train the whole model, e.g., to re-calibrate the decision boundaries via prototype alignment or feature distillation. In this paper, we propose a novel Dual-prototype Self-augment and Refinement method (DSR) for the NO-CL problem, which consists of two strategies: 1) Dual class prototypes: vanilla and high-dimensional prototypes are exploited to utilize the pre-trained information and obtain robust quasi-orthogonal representations, rather than example buffers, for both privacy preservation and memory reduction. 2) Self-augment and refinement: instead of updating the whole network, we optimize the high-dimensional prototypes alternately with the extra projection module, based on self-augmented vanilla prototypes, through a bi-level optimization problem. Extensive experiments demonstrate the effectiveness and superiority of the proposed DSR in NO-CL.
Downloads
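The abstract's appeal to "quasi-orthogonal representations" rests on a standard property of high-dimensional geometry: random unit vectors in a high-dimensional space are nearly orthogonal to one another, so each class can be represented by one compact prototype instead of a buffer of stored examples. The sketch below is a generic illustration of that property only; the dimension, class count, and random construction are illustrative assumptions, not the authors' actual prototype design.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4096          # illustrative high-dimensional prototype space
num_classes = 100   # illustrative number of classes

# Random Gaussian vectors, normalized to unit length. In high dimensions,
# such vectors are quasi-orthogonal: pairwise cosine similarities
# concentrate around 0 with standard deviation ~ 1/sqrt(dim).
protos = rng.standard_normal((num_classes, dim))
protos /= np.linalg.norm(protos, axis=1, keepdims=True)

cos = protos @ protos.T  # pairwise cosine similarities
off_diag = cos[~np.eye(num_classes, dtype=bool)]
# Maximum off-diagonal |cosine| stays small (order 1/sqrt(dim)),
# so the prototypes interfere little with one another.
print(abs(off_diag).max())
```

This is why moving prototypes to a higher-dimensional space (as the dual-prototype design does) reduces interference between classes without storing any raw data.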
Published
2024-03-24
How to Cite
Huo, F., Xu, W., Guo, J., Wang, H., & Fan, Y. (2024). Non-exemplar Online Class-Incremental Continual Learning via Dual-Prototype Self-Augment and Refinement. Proceedings of the AAAI Conference on Artificial Intelligence, 38(11), 12698-12707. https://doi.org/10.1609/aaai.v38i11.29165
Section
AAAI Technical Track on Machine Learning II