Adaptive Prompting for Continual Relation Extraction: A Within-Task Variance Perspective
DOI: https://doi.org/10.1609/aaai.v39i23.34616

Abstract
To address catastrophic forgetting in Continual Relation Extraction (CRE), many current approaches rely on memory buffers to rehearse previously learned knowledge while acquiring new tasks. Recently, prompt-based methods have emerged as potent alternatives to rehearsal-based strategies, demonstrating strong empirical performance. However, upon analyzing existing prompt-based approaches for CRE, we identified several critical limitations, such as inaccurate prompt selection, inadequate mechanisms for mitigating forgetting in shared parameters, and suboptimal handling of cross-task and within-task variances. To overcome these challenges, we draw inspiration from the relationship between prefix tuning and mixture of experts, proposing a novel approach that employs a prompt pool for each task, capturing variations within each task while enhancing cross-task variances. Furthermore, we incorporate a generative model to consolidate prior knowledge within shared parameters, eliminating the need for explicit data storage. Extensive experiments validate the efficacy of our approach, demonstrating superior performance over state-of-the-art prompt-based and rehearsal-free methods in continual relation extraction.
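The per-task prompt-pool idea outlined in the abstract can be sketched in code. The snippet below is a minimal illustration, not the authors' implementation: the class name, pool sizes, and the cosine-similarity key-matching rule are all illustrative assumptions. Each task owns a small pool of prompt vectors with matching keys; at inference, a query embedding selects the closest prompts, which are concatenated into a prefix in the style of prefix tuning.

```python
import numpy as np

class TaskPromptPool:
    """Illustrative per-task prompt pool (hypothetical, for exposition only).

    Each task owns `pool_size` learnable prompt matrices plus matching
    selection keys; a query embedding picks the closest prompts, which are
    prepended to the input in prefix-tuning style.
    """

    def __init__(self, pool_size, prompt_len, dim, rng):
        self.keys = rng.normal(size=(pool_size, dim))            # selection keys
        self.prompts = rng.normal(size=(pool_size, prompt_len, dim))

    def select(self, query, top_k=2):
        # Cosine similarity between the query embedding and pool keys.
        sims = (self.keys @ query) / (
            np.linalg.norm(self.keys, axis=1) * np.linalg.norm(query) + 1e-8)
        idx = np.argsort(-sims)[:top_k]
        # Concatenate the chosen prompts into a single prefix.
        return np.concatenate(self.prompts[idx], axis=0)

# One pool per task, so within-task variation is captured by the pool
# while tasks stay separated (the abstract's cross-task variance point).
rng = np.random.default_rng(0)
pools = {t: TaskPromptPool(pool_size=4, prompt_len=3, dim=8, rng=rng)
         for t in range(2)}
query = rng.normal(size=8)
prefix = pools[0].select(query, top_k=2)
print(prefix.shape)  # (top_k * prompt_len, dim) = (6, 8)
```

In a full system the keys and prompts would be trained jointly with the backbone, and a task-identification step would first choose which pool to query; here both are elided to keep the sketch self-contained.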
Published
2025-04-11
How to Cite
Le, M., Luu, T. N., The, A. N., Le, T.-T., Nguyen, T., Nguyen, T. T., Van, L. N., & Nguyen, T. H. (2025). Adaptive Prompting for Continual Relation Extraction: A Within-Task Variance Perspective. Proceedings of the AAAI Conference on Artificial Intelligence, 39(23), 24384-24392. https://doi.org/10.1609/aaai.v39i23.34616
Section
AAAI Technical Track on Natural Language Processing II