K-BERT: Enabling Language Representation with Knowledge Graph

Authors

  • Weijie Liu, Peking University
  • Peng Zhou, Tencent
  • Zhe Zhao, Tencent
  • Zhiruo Wang, Beijing Normal University
  • Qi Ju, Tencent
  • Haotang Deng, Tencent
  • Ping Wang, Tencent

DOI:

https://doi.org/10.1609/aaai.v34i03.5681

Abstract

Pre-trained language representation models, such as BERT, capture a general language representation from large-scale corpora but lack domain-specific knowledge. When reading domain-specific text, experts draw on relevant knowledge to make inferences. To give machines this capability, we propose a knowledge-enabled language representation model (K-BERT) with knowledge graphs (KGs), in which triples are injected into sentences as domain knowledge. However, injecting too much knowledge may divert a sentence from its correct meaning, which we call the knowledge noise (KN) issue. To overcome KN, K-BERT introduces soft-position embeddings and a visible matrix to limit the impact of the injected knowledge. Because K-BERT can load model parameters from pre-trained BERT, it can inject domain knowledge simply by being equipped with a KG, without any additional pre-training of its own. Our investigation reveals promising results on twelve NLP tasks. Especially on domain-specific tasks (including finance, law, and medicine), K-BERT significantly outperforms BERT, which demonstrates that K-BERT is an excellent choice for knowledge-driven problems that require domain expertise.
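The soft-position and visible-matrix mechanism described above can be illustrated with a minimal sketch (not the authors' code): matched KG triples are appended after the entity tokens they attach to, branch tokens reuse position indices continuing from the entity's position, and the visible matrix lets a branch token attend only to its own branch and its anchoring entity. The toy sentence, toy KG, and all function names below are illustrative assumptions; for simplicity, every triple attached to one entity is treated as part of a single branch.

import numpy as np

def build_sentence_tree(tokens, kg):
    """Flatten a sentence plus injected triples into one token list,
    recording soft positions and which trunk token each branch belongs to."""
    flat_tokens, soft_pos, branch_of = [], [], []
    pos = 0
    for tok in tokens:
        trunk_idx = len(flat_tokens)
        flat_tokens.append(tok)
        soft_pos.append(pos)
        branch_of.append(None)                 # None marks a trunk (sentence) token
        # Inject every (relation, object) pair attached to this entity.
        for rel, obj in kg.get(tok, []):
            for depth, branch_tok in enumerate([rel, obj], start=1):
                flat_tokens.append(branch_tok)
                soft_pos.append(pos + depth)   # soft position continues from the entity
                branch_of.append(trunk_idx)    # remember the anchoring entity
        pos += 1
    return flat_tokens, soft_pos, branch_of

def visible_matrix(branch_of):
    """Token i sees token j iff both are on the trunk, or they belong to the
    same branch, or one is the entity that anchors the other's branch."""
    n = len(branch_of)
    vm = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            trunk_i = branch_of[i] is None
            trunk_j = branch_of[j] is None
            if trunk_i and trunk_j:
                vm[i, j] = 1
            elif not trunk_i and (branch_of[i] == j or branch_of[i] == branch_of[j]):
                vm[i, j] = 1
            elif not trunk_j and branch_of[j] == i:
                vm[i, j] = 1
    return vm

# Toy example (assumed for illustration only).
tokens = ["[CLS]", "Tim", "Cook", "is", "visiting", "Beijing", "now"]
kg = {"Cook": [("CEO", "Apple")], "Beijing": [("capital", "China")]}
flat, soft, branch = build_sentence_tree(tokens, kg)
print(list(zip(flat, soft)))      # branch tokens share the position range after their entity
print(visible_matrix(branch))     # branch tokens are invisible to unrelated trunk tokens

In this sketch, "CEO Apple" receives soft positions 3 and 4 (continuing from "Cook" at position 2) while the trunk token "is" also gets soft position 3, and the visible matrix masks "Apple" from "visiting", which is how the KN issue described in the abstract is contained.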

Published

2020-04-03

How to Cite

Liu, W., Zhou, P., Zhao, Z., Wang, Z., Ju, Q., Deng, H., & Wang, P. (2020). K-BERT: Enabling Language Representation with Knowledge Graph. Proceedings of the AAAI Conference on Artificial Intelligence, 34(03), 2901-2908. https://doi.org/10.1609/aaai.v34i03.5681

Section

AAAI Technical Track: Knowledge Representation and Reasoning