Re-TACRED: Addressing Shortcomings of the TACRED Dataset

Authors

  • George Stoica, Carnegie Mellon University
  • Emmanouil Antonios Platanios, Microsoft Semantic Machines
  • Barnabas Poczos, Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v35i15.17631

Keywords:

Information Extraction

Abstract

TACRED is one of the largest and most widely used sentence-level relation extraction datasets. Models evaluated on this dataset consistently set new state-of-the-art performance, yet they still exhibit large error rates despite leveraging external knowledge and unsupervised pretraining on large text corpora. A recent study suggested that this may be due to poor dataset quality: it observed that over 50% of the most challenging sentences from the development and test sets are incorrectly labeled, accounting for an average drop of 8% in model F1 score. However, that study was limited to a small, biased sample of 5k (out of a total of 106k) sentences, substantially restricting the generalizability and broader implications of its findings. In this paper, we address these shortcomings by: (i) performing a comprehensive study over the whole TACRED dataset, (ii) proposing an improved crowdsourcing strategy and deploying it to re-annotate the whole dataset, and (iii) performing a thorough analysis to understand how correcting the TACRED annotations affects previously published results. After verification, we observed that 23.9% of TACRED labels are incorrect. Moreover, evaluating several models on our revised dataset yields an average F1 score improvement of 14.3% and helps uncover significant relationships between the different models (rather than simply offsetting or scaling their scores by a constant factor). Finally, aside from our analysis, we also release Re-TACRED, a new, completely re-annotated version of the TACRED dataset that can be used to perform reliable evaluation of relation extraction models.
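The F1 scores referenced above follow the standard TACRED evaluation protocol: micro-averaged precision, recall, and F1 over relation predictions, with the no_relation class excluded from both the predicted and gold counts. The sketch below is our own illustration of that scoring convention (not code released with the paper); the function name and example labels are ours.

```python
NO_RELATION = "no_relation"

def tacred_micro_f1(gold, pred):
    """Micro-averaged precision/recall/F1 over relation labels,
    ignoring no_relation (the convention of the TACRED scorer)."""
    correct = guessed = actual = 0
    for g, p in zip(gold, pred):
        if p != NO_RELATION:
            guessed += 1          # model predicted a relation
            if p == g:
                correct += 1      # ... and it matches the gold label
        if g != NO_RELATION:
            actual += 1           # gold label is a real relation
    precision = correct / guessed if guessed else 0.0
    recall = correct / actual if actual else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: three sentences, one spurious prediction.
gold = ["org:founded_by", "no_relation", "per:title"]
pred = ["org:founded_by", "per:title", "per:title"]
print(tacred_micro_f1(gold, pred))  # (0.667, 1.0, 0.8)
```

Under this metric, correcting mislabeled gold annotations changes both the precision and recall terms, which is why re-annotation can shift model scores by different amounts rather than by a constant offset.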

Published

2021-05-18

How to Cite

Stoica, G., Platanios, E. A., & Poczos, B. (2021). Re-TACRED: Addressing Shortcomings of the TACRED Dataset. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13843-13850. https://doi.org/10.1609/aaai.v35i15.17631

Issue

Vol. 35 No. 15 (2021)

Section

AAAI Technical Track on Speech and Natural Language Processing II