AUC Maximization for Low-Resource Named Entity Recognition

Authors

  • Ngoc Dang Nguyen, Monash University
  • Wei Tan, Monash University
  • Lan Du, Monash University
  • Wray Buntine, VinUniversity
  • Richard Beare, Monash University
  • Changyou Chen, University at Buffalo

DOI:

https://doi.org/10.1609/aaai.v37i11.26571

Keywords:

SNLP: Syntax -- Tagging, Chunking & Parsing, SNLP: Learning & Optimization for SNLP

Abstract

Current work in named entity recognition (NER) uses either cross entropy (CE) or conditional random fields (CRF) as the objective/loss function to optimize the underlying NER model. Both of these traditional objective functions generally produce adequate performance when the data distribution is balanced and sufficient annotated training examples are available. But because NER is an inherently imbalanced tagging problem, model performance can suffer under low-resource settings with these standard objective functions. Motivated by recent advances in area under the ROC curve (AUC) maximization, we propose to optimize the NER model by maximizing the AUC score. We give evidence that simply combining two binary classifiers that maximize the AUC score achieves significant performance improvements over traditional loss functions under low-resource NER settings. We also conduct extensive experiments to demonstrate the advantages of our method under low-resource and highly imbalanced data distribution settings. To the best of our knowledge, this is the first work that brings AUC maximization to the NER setting. Furthermore, we show that our method is agnostic to different types of NER embeddings, models and domains. The code of this work is available at https://github.com/dngu0061/NER-AUC-2T.
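The abstract describes the approach only at a high level. As a rough illustration of the underlying idea, and not the authors' released implementation, the sketch below pairs a simple pairwise squared-hinge AUC surrogate with two token-level binary heads over a shared encoder. The class names (PairwiseAUCLoss, TwoHeadAUCNER), the head names, the margin hyperparameter, and the choice of surrogate and task decomposition are all assumptions made for illustration; the released code at the repository above may use a different AUC formulation and a different split of the tagging task.

    # Illustrative sketch only: a pairwise squared-hinge AUC surrogate combined
    # with two token-level binary heads, loosely following the abstract's idea of
    # combining two AUC-maximizing binary classifiers for NER.
    import torch
    import torch.nn as nn

    class PairwiseAUCLoss(nn.Module):
        """Squared-hinge AUC surrogate: penalizes positive scores that fail to
        exceed negative scores by a margin (margin value is an assumption)."""
        def __init__(self, margin: float = 1.0):
            super().__init__()
            self.margin = margin

        def forward(self, scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
            pos = scores[labels == 1]           # scores of tokens in the target class
            neg = scores[labels == 0]           # scores of all remaining tokens
            if pos.numel() == 0 or neg.numel() == 0:
                return scores.sum() * 0.0       # skip batches lacking one of the classes
            diff = pos.unsqueeze(1) - neg.unsqueeze(0)   # all positive-negative pairs
            return torch.clamp(self.margin - diff, min=0).pow(2).mean()

    class TwoHeadAUCNER(nn.Module):
        """Hypothetical two-head tagger: one head scores entity tokens, a second
        head scores (for example) entity-beginning tokens; each head is trained
        with the AUC surrogate and the two are combined at decoding time."""
        def __init__(self, encoder: nn.Module, hidden_size: int):
            super().__init__()
            self.encoder = encoder                      # any token encoder, e.g. a BERT model
            self.head_entity = nn.Linear(hidden_size, 1)
            self.head_begin = nn.Linear(hidden_size, 1)
            self.auc_loss = PairwiseAUCLoss()

        def forward(self, inputs, entity_labels, begin_labels):
            # Assumes a HuggingFace-style encoder output; padding tokens should be
            # masked out before computing the loss (omitted here for brevity).
            h = self.encoder(**inputs).last_hidden_state          # (batch, seq, hidden)
            s_entity = self.head_entity(h).squeeze(-1)
            s_begin = self.head_begin(h).squeeze(-1)
            # Sum the two per-task AUC surrogates; equal weighting is a design choice.
            return self.auc_loss(s_entity.flatten(), entity_labels.flatten()) + \
                   self.auc_loss(s_begin.flatten(), begin_labels.flatten())

In this sketch, each head only needs to rank tokens of its class above the rest, which is what makes an AUC objective attractive when entity tokens are heavily outnumbered by non-entity tokens.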

Published

2023-06-26

How to Cite

Nguyen, N. D., Tan, W., Du, L., Buntine, W., Beare, R., & Chen, C. (2023). AUC Maximization for Low-Resource Named Entity Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13389-13399. https://doi.org/10.1609/aaai.v37i11.26571

Section

AAAI Technical Track on Speech & Natural Language Processing