Visual Pivoting for (Unsupervised) Entity Alignment

Authors

  • Fangyu Liu University of Cambridge
  • Muhao Chen University of Southern California, University of Pennsylvania
  • Dan Roth University of Pennsylvania
  • Nigel Collier University of Cambridge

DOI:

https://doi.org/10.1609/aaai.v35i5.16550

Keywords:

Linked Open Data, Knowledge Graphs & KB Completion, Language Grounding & Multi-modal NLP

Abstract

This work studies the use of visual semantic representations to align entities in heterogeneous knowledge graphs (KGs). Images are natural components of many existing KGs. By combining visual knowledge with other auxiliary information, we show that the proposed new approach, EVA, creates a holistic entity representation that provides strong signals for cross-graph entity alignment. Moreover, previous entity alignment methods require human-labelled seed alignments, which restricts their applicability. EVA provides a completely unsupervised solution by leveraging the visual similarity of entities to create an initial seed dictionary (visual pivots). Experiments on the benchmark datasets DBP15k and DWY15k show that EVA offers state-of-the-art performance on both monolingual and cross-lingual entity alignment tasks. Furthermore, we discover that images are particularly useful for aligning long-tail KG entities, which inherently lack the structural contexts necessary for capturing the correspondences. Code release: https://github.com/cambridgeltl/eva; project page: http://cogcomp.org/page/publication_view/927.
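The unsupervised seeding idea described above can be illustrated with a minimal sketch: given image embeddings for entities in two KGs, treat mutual nearest neighbours under cosine similarity as candidate seed pairs. This is a simplified stand-in for EVA's visual-pivot induction, not the paper's exact procedure; the function name and toy embeddings are hypothetical.

```python
import numpy as np

def visual_pivots(emb_a, emb_b):
    """Return index pairs (i, j) of entities whose image embeddings are
    mutual nearest neighbours under cosine similarity -- a rough sketch
    of inducing a seed dictionary (visual pivots) without supervision."""
    # L2-normalise rows so the dot product equals cosine similarity.
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sim = a @ b.T
    best_b = sim.argmax(axis=1)  # closest KG-B entity for each KG-A entity
    best_a = sim.argmax(axis=0)  # closest KG-A entity for each KG-B entity
    # Keep only pairs that pick each other (mutual nearest neighbours).
    return [(i, int(j)) for i, j in enumerate(best_b) if best_a[j] == i]

# Toy image embeddings for three entities in each KG (hypothetical data).
emb_a = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
emb_b = np.array([[0.9, 0.1], [0.1, 0.9], [0.6, 0.8]])
pairs = visual_pivots(emb_a, emb_b)
print(pairs)  # → [(0, 0), (1, 1), (2, 2)]
```

In the full method, such pivot pairs would bootstrap the alignment model, which then combines visual signals with structural and other auxiliary information.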

Published

2021-05-18

How to Cite

Liu, F., Chen, M., Roth, D., & Collier, N. (2021). Visual Pivoting for (Unsupervised) Entity Alignment. Proceedings of the AAAI Conference on Artificial Intelligence, 35(5), 4257-4266. https://doi.org/10.1609/aaai.v35i5.16550

Section

AAAI Technical Track on Data Mining and Knowledge Management