Latent Relation Language Models

Authors

  • Hiroaki Hayashi Carnegie Mellon University
  • Zecong Hu Carnegie Mellon University
  • Chenyan Xiong Microsoft Research AI
  • Graham Neubig Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v34i05.6298

Abstract

In this paper, we propose Latent Relation Language Models (LRLMs), a class of language models that parameterizes the joint distribution over the words in a document and the entities that occur therein via knowledge graph relations. This model has a number of attractive properties: it not only improves language modeling performance, but is also able to annotate the posterior probability of entity spans for a given text through relations. Experiments demonstrate empirical improvements over both word-based language models and a previous approach that incorporates knowledge graph information. Qualitative analysis further demonstrates the proposed model's ability to learn to predict appropriate relations in context.
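To make the abstract's core mechanism concrete, here is a minimal toy sketch of the general idea: the probability of a text is a marginal over latent "sources", where each span is generated either word-by-word from a vocabulary distribution or copied whole from the surface form of a knowledge-graph relation, and a forward dynamic program sums over all segmentations. This is an illustrative reconstruction, not the authors' implementation: the `KG` dictionary, `p_word`, `p_relation`, and the fixed source priors are invented stand-ins for the paper's learned, context-dependent parameterization.

```python
# Hypothetical sketch of a latent-relation mixture language model.
# All names and probabilities here are illustrative assumptions.

# Toy knowledge graph for a topic entity: relation -> object surface form (tokens).
KG = {
    "birthplace": ["honolulu"],
    "spouse": ["michelle", "robinson"],
}

VOCAB = ["obama", "was", "born", "in", "honolulu", "married", "michelle", "robinson"]

P_WORD_SOURCE = 0.8               # prior prob. of generating from the word model
P_REL_SOURCE = 1.0 - P_WORD_SOURCE  # prior prob. of generating via some relation

def p_word(token):
    """Stand-in for a contextual word model; uniform over VOCAB for simplicity."""
    return 1.0 / len(VOCAB) if token in VOCAB else 0.0

def p_relation(rel):
    """Stand-in for the distribution over relations; uniform over the toy KG."""
    return 1.0 / len(KG)

def sentence_prob(tokens):
    """Forward algorithm: alpha[i] = marginal probability of the first i tokens,
    summed over every latent segmentation into word steps and relation spans."""
    n = len(tokens)
    alpha = [0.0] * (n + 1)
    alpha[0] = 1.0
    for i in range(1, n + 1):
        # Case 1: token i was generated from the word-level model.
        alpha[i] += alpha[i - 1] * P_WORD_SOURCE * p_word(tokens[i - 1])
        # Case 2: some relation's object span ends exactly at position i.
        for rel, span in KG.items():
            k = len(span)
            if i >= k and tokens[i - k:i] == span:
                alpha[i] += alpha[i - k] * P_REL_SOURCE * p_relation(rel)
    return alpha[n]

sent = ["obama", "was", "born", "in", "honolulu"]
print(sentence_prob(sent))
```

Under this sketch, the posterior probability that "honolulu" was produced by the `birthplace` relation (the span annotation ability the abstract mentions) falls out of Bayes' rule over the two ways the final forward state can be reached.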

Published

2020-04-03

How to Cite

Hayashi, H., Hu, Z., Xiong, C., & Neubig, G. (2020). Latent Relation Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 7911-7918. https://doi.org/10.1609/aaai.v34i05.6298

Section

AAAI Technical Track: Natural Language Processing