Contrastive Triple Extraction with Generative Transformer

Authors

  • Hongbin Ye, Zhejiang University; AZFT Joint Lab for Knowledge Engine
  • Ningyu Zhang, Zhejiang University; AZFT Joint Lab for Knowledge Engine
  • Shumin Deng, Zhejiang University; AZFT Joint Lab for Knowledge Engine
  • Mosha Chen, Alibaba Group
  • Chuanqi Tan, Alibaba Group
  • Fei Huang, Alibaba Group
  • Huajun Chen, Zhejiang University; AZFT Joint Lab for Knowledge Engine

Keywords:

Information Extraction

Abstract

Triple extraction is an essential task in information extraction for natural language processing and knowledge graph construction. In this paper, we revisit the end-to-end triple extraction task as a sequence generation problem. Since generative triple extraction may struggle to capture long-term dependencies and can generate unfaithful triples, we introduce a novel model: contrastive triple extraction with a generative transformer. Specifically, we introduce a single shared transformer module for encoder-decoder-based generation. To generate faithful results, we propose a novel triplet contrastive training objective. Moreover, we introduce two mechanisms to further improve model performance (i.e., batch-wise dynamic attention-masking and triple-wise calibration). Experimental results on three datasets (i.e., NYT, WebNLG, and MIE) show that our approach outperforms the baselines.
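The abstract does not spell out the form of the triplet contrastive training objective. As an illustration only (not the paper's exact formulation), a generic margin-based contrastive loss that pushes the model's score for a ground-truth triple above the score of a corrupted (unfaithful) triple might be sketched as follows; the function name, `margin` value, and scoring setup are all assumptions for the sketch:

```python
def triplet_contrastive_loss(score_pos: float, score_neg: float,
                             margin: float = 1.0) -> float:
    """Hinge-style contrastive loss (illustrative sketch).

    score_pos: model score for a ground-truth triple.
    score_neg: model score for a corrupted triple (e.g., with a
               swapped entity or relation).
    The loss is zero once the positive score exceeds the negative
    score by at least `margin`; otherwise it penalizes the gap.
    """
    return max(0.0, margin - (score_pos - score_neg))


# Example: a well-separated pair incurs no loss,
# while an unseparated pair is penalized by the full margin.
well_separated = triplet_contrastive_loss(2.0, 0.5)   # gap 1.5 > margin
unseparated = triplet_contrastive_loss(0.5, 0.5)      # gap 0.0 < margin
```

In the paper's setting, the positive and negative scores would come from the shared transformer scoring faithful versus corrupted triples, with the loss added to the generation objective; the sketch above only shows the margin mechanism.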

Published

2021-05-18

How to Cite

Ye, H., Zhang, N., Deng, S., Chen, M., Tan, C., Huang, F., & Chen, H. (2021). Contrastive Triple Extraction with Generative Transformer. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16), 14257-14265. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17677

Section

AAAI Technical Track on Speech and Natural Language Processing III