Contrastive Triple Extraction with Generative Transformer
DOI:
https://doi.org/10.1609/aaai.v35i16.17677
Keywords:
Information Extraction
Abstract
Triple extraction is an essential task in information extraction for natural language processing and knowledge graph construction. In this paper, we revisit end-to-end triple extraction as a sequence generation task. Since generative triple extraction may struggle to capture long-term dependencies and can generate unfaithful triples, we introduce a novel model: contrastive triple extraction with a generative transformer. Specifically, we use a single shared transformer module for encoder-decoder-based generation. To generate faithful results, we propose a novel triplet contrastive training objective. Moreover, we introduce two mechanisms to further improve model performance: batch-wise dynamic attention-masking and triple-wise calibration. Experimental results on three datasets (NYT, WebNLG, and MIE) show that our approach outperforms the baselines.
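The abstract names a triplet contrastive training objective but gives no formula. A minimal sketch of one common form of such an objective, a margin-based contrastive loss over vector encodings, is below; the function names, the cosine-similarity choice, and the margin formulation are illustrative assumptions, not necessarily the paper's actual loss.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def triplet_contrastive_loss(anchor, positive, negative, margin=1.0):
    """Generic margin-based contrastive loss (illustrative, not the paper's
    exact objective): push the anchor's similarity to the faithful (positive)
    triple encoding above its similarity to a corrupted (negative) triple
    encoding by at least `margin`."""
    return max(0.0, margin - cosine(anchor, positive) + cosine(anchor, negative))
```

Here `anchor` would stand for an encoding of the source sentence, `positive` for an encoding of a gold triple, and `negative` for a corrupted triple; minimizing the loss encourages the model to score faithful triples above unfaithful ones.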
Published
2021-05-18
How to Cite
Ye, H., Zhang, N., Deng, S., Chen, M., Tan, C., Huang, F., & Chen, H. (2021). Contrastive Triple Extraction with Generative Transformer. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16), 14257-14265. https://doi.org/10.1609/aaai.v35i16.17677
Issue
Section
AAAI Technical Track on Speech and Natural Language Processing III