Visual Pivoting for (Unsupervised) Entity Alignment
Keywords: Linked Open Data, Knowledge Graphs & KB Completion, Language Grounding & Multi-modal NLP
Abstract
This work studies the use of visual semantic representations to align entities in heterogeneous knowledge graphs (KGs). Images are natural components of many existing KGs. By combining visual knowledge with other auxiliary information, we show that the proposed new approach, EVA, creates a holistic entity representation that provides strong signals for cross-graph entity alignment. Moreover, previous entity alignment methods require human-labelled seed alignments, which restricts their applicability. EVA provides a completely unsupervised solution by leveraging the visual similarity of entities to create an initial seed dictionary (visual pivots). Experiments on the benchmark datasets DBP15k and DWY15k show that EVA offers state-of-the-art performance on both monolingual and cross-lingual entity alignment tasks. Furthermore, we find that images are particularly useful for aligning long-tail KG entities, which inherently lack the structural contexts necessary for capturing correspondences. Code release: https://github.com/cambridgeltl/eva; project page: http://cogcomp.org/page/publication_view/927.
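The visual-pivot idea described in the abstract can be illustrated with a minimal sketch: given precomputed image embeddings for the entities of two KGs, take mutually nearest entity pairs under cosine similarity as an unsupervised seed dictionary. The function name, the mutual-nearest-neighbour criterion, and the threshold parameter below are illustrative assumptions, not necessarily the exact procedure used in EVA.

```python
import numpy as np

def visual_pivot_seeds(feats_g1, feats_g2, threshold=0.0):
    """Build an unsupervised seed dictionary from image-feature similarity.

    feats_g1: (n1, d) array of image embeddings for entities in KG 1
    feats_g2: (n2, d) array of image embeddings for entities in KG 2
    Returns (i, j) index pairs that are mutual nearest neighbours
    under cosine similarity -- a common seeding heuristic; the exact
    criterion in EVA may differ.
    """
    # L2-normalise rows so the dot product equals cosine similarity
    a = feats_g1 / np.linalg.norm(feats_g1, axis=1, keepdims=True)
    b = feats_g2 / np.linalg.norm(feats_g2, axis=1, keepdims=True)
    sim = a @ b.T                  # (n1, n2) cosine-similarity matrix

    nn_12 = sim.argmax(axis=1)     # best G2 match for each G1 entity
    nn_21 = sim.argmax(axis=0)     # best G1 match for each G2 entity
    return [(i, j) for i, j in enumerate(nn_12)
            if nn_21[j] == i and sim[i, j] > threshold]
```

Requiring the match to be mutual filters out many-to-one collisions, which keeps the seed dictionary high-precision at the cost of recall; such seeds can then bootstrap the supervised alignment machinery.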
How to Cite
Liu, F., Chen, M., Roth, D., & Collier, N. (2021). Visual Pivoting for (Unsupervised) Entity Alignment. Proceedings of the AAAI Conference on Artificial Intelligence, 35(5), 4257-4266. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16550
AAAI Technical Track on Data Mining and Knowledge Management