ProKD: An Unsupervised Prototypical Knowledge Distillation Network for Zero-Resource Cross-Lingual Named Entity Recognition

Authors

  • Ling Ge, Beihang University
  • Chunming Hu, Beihang University
  • Guanghui Ma, Beihang University
  • Hong Zhang, National Computer Network Emergency Response Technical Team / Coordination Center of China
  • Jihong Liu, Beihang University

DOI:

https://doi.org/10.1609/aaai.v37i11.26507

Keywords:

SNLP: Applications, SNLP: Information Extraction

Abstract

For named entity recognition (NER) in zero-resource languages, knowledge distillation is an effective means of transferring language-independent knowledge from rich-resource source languages to zero-resource target languages. Typically, these approaches adopt a teacher-student architecture, where the teacher network is trained on the source language and the student network learns from the teacher and is expected to perform well in the target language. Despite the impressive performance achieved by these methods, we argue that they have two limitations. First, the teacher network fails to effectively learn language-independent knowledge shared across languages because of differences in feature distribution between the source and target languages. Second, the student network acquires all of its knowledge from the teacher network and neglects the learning of target language-specific knowledge. These limitations hinder the model's performance in the target language. This paper proposes an unsupervised prototypical knowledge distillation network (ProKD) to address these issues. Specifically, ProKD presents a contrastive learning-based prototype alignment method that achieves class-level feature alignment by adjusting the distance between the prototypes of the source and target languages, boosting the teacher network's capacity to acquire language-independent knowledge. In addition, ProKD introduces a prototype self-training method that learns the intrinsic structure of the target language by retraining the student network on target data using the samples' distances from the prototypes, thereby enhancing the student network's ability to acquire language-specific knowledge. Extensive experiments on three benchmark cross-lingual NER datasets demonstrate the effectiveness of our approach.
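Since this page carries only the abstract, the following is a minimal, illustrative sketch (not the authors' released code) of the two ideas it describes: class prototypes computed as mean token features, an InfoNCE-style contrastive loss that pulls same-class source and target prototypes together, and soft pseudo-labels derived from a token's distance to the target prototypes for retraining the student. All tensor names, shapes, and hyperparameters here are assumptions for illustration.

```python
# Illustrative sketch of the two components described in the ProKD abstract.
# NOT the authors' implementation; names and hyperparameters are assumed.
import torch
import torch.nn.functional as F


def class_prototypes(features: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Mean feature vector per entity class (assumed prototype definition)."""
    protos = torch.zeros(num_classes, features.size(-1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return protos


def prototype_alignment_loss(src_protos: torch.Tensor, tgt_protos: torch.Tensor,
                             temperature: float = 0.1) -> torch.Tensor:
    """Contrastive (InfoNCE-style) loss: each source prototype should be closest
    to the target prototype of the same class and far from other classes."""
    src = F.normalize(src_protos, dim=-1)
    tgt = F.normalize(tgt_protos, dim=-1)
    logits = src @ tgt.t() / temperature           # (C, C) similarity matrix
    targets = torch.arange(src.size(0))            # matching class indices
    return F.cross_entropy(logits, targets)


def prototype_soft_labels(tgt_features: torch.Tensor, tgt_protos: torch.Tensor,
                          temperature: float = 1.0) -> torch.Tensor:
    """Soft pseudo-labels from (negative) distance to each class prototype,
    used to retrain the student on unlabeled target-language data."""
    dists = torch.cdist(tgt_features, tgt_protos)  # (N, C) Euclidean distances
    return F.softmax(-dists / temperature, dim=-1)


# Toy usage with random features standing in for multilingual encoder outputs.
if __name__ == "__main__":
    C, D = 5, 8                                             # classes, feature dim
    src_feats, src_labels = torch.randn(40, D), torch.randint(0, C, (40,))
    tgt_feats, tgt_pseudo = torch.randn(30, D), torch.randint(0, C, (30,))

    src_p = class_prototypes(src_feats, src_labels, C)
    tgt_p = class_prototypes(tgt_feats, tgt_pseudo, C)      # pseudo-labels from the teacher

    align_loss = prototype_alignment_loss(src_p, tgt_p)     # would train the teacher encoder
    soft_y = prototype_soft_labels(tgt_feats, tgt_p)        # would supervise student self-training
    print(align_loss.item(), soft_y.shape)
```

In this sketch the alignment loss corresponds to the teacher-side prototype alignment and the distance-based soft labels to the student-side prototype self-training; how these terms are weighted and combined with the standard distillation objective is not specified by the abstract.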

Published

2023-06-26

How to Cite

Ge, L., Hu, C., Ma, G., Zhang, H., & Liu, J. (2023). ProKD: An Unsupervised Prototypical Knowledge Distillation Network for Zero-Resource Cross-Lingual Named Entity Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 12818-12826. https://doi.org/10.1609/aaai.v37i11.26507

Section

AAAI Technical Track on Speech & Natural Language Processing