LADA-Trans-NER: Adaptive Efficient Transformer for Chinese Named Entity Recognition Using Lexicon-Attention and Data-Augmentation

Authors

  • Jiguo Liu — Institute of Information Engineering, Chinese Academy of Sciences
  • Chao Liu — Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
  • Nan Li — Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
  • Shihao Gao — Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
  • Mingqi Liu — Institute of Information Engineering, Chinese Academy of Sciences
  • Dali Zhu — Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v37i11.26554

Keywords:

SNLP: Information Extraction, KRR: Knowledge Acquisition, KRR: Knowledge Engineering, KRR: Knowledge Representation Languages, ML: Multi-Class/Multi-Label Learning & Extreme Classification, SNLP: Applications, SNLP: Interpretability & Analysis of NLP Models, SNLP: Lexical & Frame Semantics, Semantic Parsing, SNLP: Ontology Induction From Text, SNLP: Text Classification, SNLP: Text Mining

Abstract

Recently, word enhancement has become popular for Chinese Named Entity Recognition (NER), as it reduces segmentation errors and enriches the semantic and boundary information of Chinese words. However, existing methods tend to ignore the semantic relationships with the surrounding sentence context once lexical information is integrated, and the regularity of word-length information has not been fully exploited in existing word-character fusion methods. In this work, we propose a Lexicon-Attention and Data-Augmentation (LADA) method for Chinese NER. We discuss the challenges existing methods face in incorporating word information for NER and show how our proposed method overcomes them. LADA is based on a Transformer encoder that uses a lexicon to construct a directed graph and fuses word information by updating the optimal edges of the graph. Specifically, we introduce an advanced data-augmentation method to obtain the optimal representation for the NER task. Experimental results show that the augmentation performed by LADA considerably boosts the performance of our NER system and achieves significantly better results than previous state-of-the-art methods and variant models from the literature on four publicly available NER datasets: Resume, MSRA, Weibo, and OntoNotes v4. We also observe that LADA generalizes better and transfers to a real-world setting with multi-source complex entities.
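The abstract's "utilizes lexicon" step refers to the common first stage of lexicon-enhanced Chinese NER: matching every dictionary word that spans a substring of the character sequence, so the matched spans can later be fused into the character representations. The sketch below is an illustrative, generic trie-based matcher, not the paper's implementation; the toy lexicon and function names are hypothetical.

```python
# Illustrative sketch of lexicon matching for Chinese NER (not LADA itself).
# A trie over the lexicon lets us enumerate, for each start position,
# every dictionary word that matches a span of the sentence.

def build_trie(lexicon):
    """Build a nested-dict trie; the '$' key marks a complete word."""
    root = {}
    for word in lexicon:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = word
    return root

def match_words(sentence, trie):
    """Return (start, end, word) for every lexicon word found in the sentence."""
    matches = []
    for i in range(len(sentence)):
        node = trie
        for j in range(i, len(sentence)):
            node = node.get(sentence[j])
            if node is None:          # no lexicon word continues this prefix
                break
            if "$" in node:           # a complete word ends at position j
                matches.append((i, j + 1, node["$"]))
    return matches

# Hypothetical toy lexicon over the classic segmentation-ambiguity example
# 南京市长江大桥 ("Nanjing Yangtze River Bridge").
lexicon = ["南京", "南京市", "市长", "长江", "长江大桥", "大桥"]
spans = match_words("南京市长江大桥", build_trie(lexicon))
```

The overlapping spans this produces (e.g. both 市长 "mayor" and 长江 "Yangtze") are exactly the segmentation ambiguity that word-character fusion methods must resolve; in LADA this is where the directed graph over matched words would come in.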

Published

2023-06-26

How to Cite

Liu, J., Liu, C., Li, N., Gao, S., Liu, M., & Zhu, D. (2023). LADA-Trans-NER: Adaptive Efficient Transformer for Chinese Named Entity Recognition Using Lexicon-Attention and Data-Augmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13236-13245. https://doi.org/10.1609/aaai.v37i11.26554

Section

AAAI Technical Track on Speech & Natural Language Processing