DKPLM: Decomposable Knowledge-Enhanced Pre-trained Language Model for Natural Language Understanding

Authors

  • Taolin Zhang, East China Normal University
  • Chengyu Wang, Alibaba Group
  • Nan Hu, East China Normal University
  • Minghui Qiu, Alibaba Group
  • Chengguang Tang, Alibaba Group
  • Xiaofeng He, East China Normal University
  • Jun Huang, Alibaba Group

DOI:

https://doi.org/10.1609/aaai.v36i10.21425

Keywords:

Speech & Natural Language Processing (SNLP)

Abstract

Knowledge-Enhanced Pre-trained Language Models (KEPLMs) are pre-trained models with relation triples injected from knowledge graphs to improve language understanding abilities. To guarantee effective knowledge injection, previous studies integrate models with knowledge encoders for representing knowledge retrieved from knowledge graphs. The operations for knowledge retrieval and encoding bring significant computational burdens, restricting the usage of such models in real-world applications that require high inference speed. In this paper, we propose a novel KEPLM named DKPLM that decomposes the knowledge injection process of pre-trained language models across the pre-training, fine-tuning, and inference stages, which facilitates the application of KEPLMs in real-world scenarios. Specifically, we first detect knowledge-aware long-tail entities as the targets for knowledge injection, enhancing the KEPLMs' semantic understanding abilities and avoiding the injection of redundant information. The embeddings of long-tail entities are replaced by "pseudo token representations" formed from relevant knowledge triples. We further design a relational knowledge decoding task for pre-training, which forces the model to truly understand the injected knowledge through relation triple reconstruction. Experiments show that our model significantly outperforms other KEPLMs on zero-shot knowledge probing tasks and multiple knowledge-aware language understanding tasks. We further show that DKPLM has a higher inference speed than other competing models due to the decomposition mechanism.
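The abstract describes the core injection mechanism: at detected long-tail entity positions, the input embeddings are swapped for "pseudo token representations" built from retrieved relation triples. The following is a minimal conceptual sketch of that replacement step in PyTorch; the module name, tensor shapes, and the simple linear projection are illustrative assumptions for exposition, not the authors' released implementation.

import torch
import torch.nn as nn

class PseudoTokenInjector(nn.Module):
    # Sketch: replace long-tail entity embeddings with pseudo token
    # representations derived from retrieved (relation, tail-entity) triples.
    def __init__(self, hidden_size: int):
        super().__init__()
        # Project concatenated relation/tail embeddings into the PLM's
        # token-embedding space (an assumed, simplified fusion).
        self.proj = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, token_embeds, long_tail_mask, triple_embeds):
        # token_embeds:   (batch, seq_len, hidden) original input embeddings
        # long_tail_mask: (batch, seq_len) bool, True at long-tail entity tokens
        # triple_embeds:  (batch, seq_len, 2*hidden) triple features aligned to
        #                 each position (zeros where no triple was retrieved)
        pseudo = self.proj(triple_embeds)        # (batch, seq_len, hidden)
        mask = long_tail_mask.unsqueeze(-1)      # broadcast over hidden dim
        # Keep ordinary tokens unchanged; swap in pseudo token representations
        # only at the detected long-tail entity positions.
        return torch.where(mask, pseudo, token_embeds)

if __name__ == "__main__":
    B, L, H = 2, 8, 16
    injector = PseudoTokenInjector(H)
    token_embeds = torch.randn(B, L, H)
    long_tail_mask = torch.zeros(B, L, dtype=torch.bool)
    long_tail_mask[:, 3] = True                  # pretend position 3 is a long-tail entity
    triple_embeds = torch.randn(B, L, 2 * H)
    print(injector(token_embeds, long_tail_mask, triple_embeds).shape)  # torch.Size([2, 8, 16])

Because the pseudo tokens are computed from the triples once and fed through the unchanged backbone, no extra knowledge encoder is needed at inference time, which is the source of the speed advantage claimed in the abstract.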

Published

2022-06-28

How to Cite

Zhang, T., Wang, C., Hu, N., Qiu, M., Tang, C., He, X., & Huang, J. (2022). DKPLM: Decomposable Knowledge-Enhanced Pre-trained Language Model for Natural Language Understanding. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 11703-11711. https://doi.org/10.1609/aaai.v36i10.21425

Section

AAAI Technical Track on Speech and Natural Language Processing