Beyond Sharpness: A Flatness Decomposition Framework for Efficient Continual Learning
DOI:
https://doi.org/10.1609/aaai.v40i24.39124
Abstract
Continual Learning (CL) aims to enable models to sequentially learn multiple tasks without forgetting previous knowledge. Recent studies have shown that optimizing towards flatter loss minima can improve model generalization. However, existing sharpness-aware methods for CL suffer from two key limitations: (1) they treat sharpness regularization as a unified signal without distinguishing the contributions of its components, and (2) they introduce substantial computational overhead that impedes practical deployment. To address these challenges, we propose FLAD, a novel optimization framework that decomposes sharpness-aware perturbations into gradient-aligned and stochastic-noise components, and show that retaining only the noise component promotes generalization. We further introduce a lightweight scheduling scheme that enables FLAD to maintain significant performance gains even under constrained training time. FLAD can be seamlessly integrated into various CL paradigms and consistently outperforms standard and sharpness-aware optimizers in diverse experimental settings, demonstrating its effectiveness and practicality in CL.
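The abstract does not spell out how the perturbation is split; below is a minimal PyTorch sketch of one way such a decomposition could be computed, assuming the gradient-aligned component is estimated by projecting the minibatch gradient onto an exponential moving average of past gradients. The function name flad_perturbation, the EMA proxy, and the radius rho are illustrative assumptions, not details taken from the paper.

import torch

def flad_perturbation(grad, ema_grad, rho=0.05, eps=1e-12):
    # Sketch only: the paper's exact estimator and update rule may differ.
    # grad:     flattened minibatch gradient (stochastic)
    # ema_grad: flattened running average of past gradients, used here as a
    #           proxy for the expected, gradient-aligned direction (assumption)
    # rho:      SAM-style perturbation radius
    denom = ema_grad.dot(ema_grad).clamp_min(eps)
    aligned = (grad.dot(ema_grad) / denom) * ema_grad  # gradient-aligned part
    noise = grad - aligned                             # stochastic-noise part
    # Retain only the noise component, rescaled to the perturbation radius.
    return rho * noise / noise.norm().clamp_min(eps)

In a SAM-style two-pass loop, this perturbation would be added to the flattened weights before the second forward-backward pass and removed before the optimizer step.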
Published
2026-03-14
How to Cite
Chen, Y., Gong, T., Zhang, Y., & Wen, W. (2026). Beyond Sharpness: A Flatness Decomposition Framework for Efficient Continual Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(24), 20354-20362. https://doi.org/10.1609/aaai.v40i24.39124
Section
AAAI Technical Track on Machine Learning I