Fine-Grained Entity Typing for Domain Independent Entity Linking

Authors

  • Yasumasa Onoe The University of Texas at Austin
  • Greg Durrett The University of Texas at Austin

DOI:

https://doi.org/10.1609/aaai.v34i05.6380

Abstract

Neural entity linking models are very powerful, but run the risk of overfitting to the domain they are trained in. In this problem, a “domain” is characterized not just by the genre of text but also by factors as specific as the particular distribution of entities, as neural models tend to overfit by memorizing properties of frequent entities in a dataset. We tackle the problem of building robust entity linking models that generalize effectively and do not rely on labeled entity linking data with a specific entity distribution. Rather than predicting entities directly, our approach models fine-grained entity properties, which can help disambiguate between even closely related entities. We derive a large inventory of types (tens of thousands) from Wikipedia categories, and use hyperlinked mentions in Wikipedia to distantly label data and train an entity typing model. At test time, we classify a mention with this typing model and use soft type predictions to link the mention to the most similar candidate entity. We evaluate our entity linking system on the CoNLL-YAGO dataset (Hoffart et al. 2011) and show that our approach outperforms prior domain-independent entity linking systems. We also test our approach in a harder setting derived from the WikilinksNED dataset (Eshel et al. 2017), where all mention-entity pairs at test time are unseen during training. Results indicate that our approach generalizes better than a state-of-the-art neural model on the dataset.
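The linking step the abstract describes can be sketched roughly as follows: the typing model yields a soft probability over the type inventory for a mention, each candidate entity carries a binary type vector derived from its Wikipedia categories, and the mention is linked to the candidate whose type vector best matches the predictions. The scoring function, type inventory, and entity names below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of type-based entity linking: score each candidate
# entity by the dot product between the mention's soft type predictions
# and the candidate's binary type vector, then pick the argmax.
# The five-type inventory and the candidate entities are toy examples.

TYPE_INVENTORY = ["person", "politician", "city", "capital", "river"]

def link(mention_type_probs, candidates):
    """Return the candidate entity whose binary type vector scores
    highest against the mention's soft type predictions."""
    best_entity, best_score = None, float("-inf")
    for entity, type_vec in candidates.items():
        # Dot product: sum predicted probability mass on the types
        # that this candidate entity possesses.
        score = sum(p * t for p, t in zip(mention_type_probs, type_vec))
        if score > best_score:
            best_entity, best_score = entity, score
    return best_entity

# Toy candidates for an ambiguous mention "Paris":
candidates = {
    "Paris_(France)": [0, 0, 1, 1, 0],  # city, capital
    "Paris_Hilton":   [1, 0, 0, 0, 0],  # person
}
# The typing model (hypothetically) puts most mass on city/capital types.
probs = [0.05, 0.01, 0.90, 0.80, 0.02]
print(link(probs, candidates))  # -> Paris_(France)
```

Because the score is a similarity over shared type features rather than a memorized entity identity, the same procedure applies unchanged to entities never seen during training, which is the source of the domain independence the abstract claims.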

Published

2020-04-03

How to Cite

Onoe, Y., & Durrett, G. (2020). Fine-Grained Entity Typing for Domain Independent Entity Linking. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 8576-8583. https://doi.org/10.1609/aaai.v34i05.6380

Section

AAAI Technical Track: Natural Language Processing