A Neural Span-Based Continual Named Entity Recognition Model

Authors

  • Yunan Zhang, Harbin Institute of Technology, Shenzhen, China
  • Qingcai Chen, Harbin Institute of Technology, Shenzhen, China; Peng Cheng Laboratory, Shenzhen, China

DOI:

https://doi.org/10.1609/aaai.v37i11.26638

Keywords:

SNLP: Information Extraction, ML: Lifelong and Continual Learning

Abstract

Named Entity Recognition (NER) models capable of Continual Learning (CL) are of real practical value in areas where entity types continuously increase (e.g., personal assistants). Meanwhile, the learning paradigm of NER has advanced to new patterns such as span-based methods, whose potential for CL has not been fully explored. In this paper, we propose SpanKL, a simple yet effective Span-based model with Knowledge distillation (KD) to preserve memories and multi-Label prediction to prevent conflicts in CL-NER. Unlike prior sequence labeling approaches, SpanKL's inherently independent modeling at the span and entity levels, together with its coherent optimization, promotes learning at each incremental step and mitigates forgetting. Experiments on synthetic CL datasets derived from OntoNotes and Few-NERD show that SpanKL significantly outperforms the previous SoTA in many aspects and obtains the smallest gap between CL and the upper bound, revealing its high practical value. The code is available at https://github.com/Qznan/SpanKL.
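The abstract names two mechanisms: span-level multi-Label prediction (independent per-type decisions, so new entity types added at later CL steps do not conflict with earlier ones) and Knowledge distillation (matching the previous model's outputs on old types to mitigate forgetting). The PyTorch sketch below illustrates how such a pair of losses can be combined; it is a minimal illustration of the general technique, not the authors' implementation, and names such as span_logits, old_logits, and the two loss helpers are hypothetical.

    # Minimal sketch (not the SpanKL code): per-type sigmoid losses over
    # span representations, plus a distillation term on old types.
    import torch
    import torch.nn.functional as F

    def multilabel_span_loss(span_logits, span_targets):
        # Independent binary (sigmoid) decision per entity type, so types
        # learned at different CL steps do not compete as in a softmax.
        # span_logits:  (num_spans, num_types) raw scores
        # span_targets: (num_spans, num_types) 0/1 labels
        return F.binary_cross_entropy_with_logits(span_logits, span_targets.float())

    def kd_span_loss(new_logits, old_logits):
        # Distill the frozen previous model's per-type probabilities on the
        # previously learned types (the first columns) into the new model.
        old_probs = torch.sigmoid(old_logits).detach()
        num_old = old_logits.size(-1)
        return F.binary_cross_entropy_with_logits(new_logits[:, :num_old], old_probs)

    # Usage at CL step t: supervised loss on the new types, KD on old types.
    spans, old_t, new_t = 8, 3, 2
    new_logits = torch.randn(spans, old_t + new_t, requires_grad=True)
    old_logits = torch.randn(spans, old_t)            # frozen step t-1 model
    targets    = torch.randint(0, 2, (spans, new_t))  # labels for new types only
    loss = (multilabel_span_loss(new_logits[:, old_t:], targets)
            + kd_span_loss(new_logits, old_logits))
    loss.backward()

In this setup, only the new types need gold labels at each step, while the KD term supplies a soft training signal for the old types from the previous model's predictions.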

Published

2023-06-26

How to Cite

Zhang, Y., & Chen, Q. (2023). A Neural Span-Based Continual Named Entity Recognition Model. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13993-14001. https://doi.org/10.1609/aaai.v37i11.26638

Section

AAAI Technical Track on Speech & Natural Language Processing