Modeling Knowledge Graphs with Composite Reasoning

Authors

  • Wanyun Cui, Shanghai University of Finance and Economics
  • Linqiu Zhang, Shanghai University of Finance and Economics

DOI:

https://doi.org/10.1609/aaai.v38i8.28675

Keywords:

DMKM: Linked Open Data, Knowledge Graphs & KB Completion

Abstract

The ability to combine multiple pieces of existing knowledge to infer new knowledge is both crucial and challenging. In this paper, we explore how facts about various entities are combined in the context of knowledge graph completion (KGC). We use composite reasoning to unify the views of different KGC models, including translational models, tensor factorization (TF)-based models, instance-based learning models, and KGC regularizers. Moreover, our comprehensive examination of composite reasoning reveals an unexpected phenomenon: certain TF-based models learn embeddings with erroneous composite reasoning, which violates their fundamental collaborative filtering assumption and reduces their effectiveness. This motivates us to reduce their composition error. Empirical evaluations demonstrate that mitigating the composition risk not only enhances the performance of TF-based models across all tested settings, but also surpasses or is competitive with the state-of-the-art performance on two out of four benchmarks.
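To make the abstract's notion of composite reasoning in TF-based KGC concrete, here is a minimal illustrative sketch. It uses a DistMult-style trilinear score and composes two relations by elementwise product; this is a common heuristic for relation composition in diagonal tensor factorization models, not the paper's actual formulation, and all embeddings below are hypothetical random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical embeddings: entities h (head), m (intermediate), t (tail)
# and relations r1, r2. In a trained KGC model these come from the learned
# embedding tables; here they are random placeholders.
h, m, t = (rng.normal(size=dim) for _ in range(3))
r1, r2 = (rng.normal(size=dim) for _ in range(2))

def score(head: np.ndarray, rel: np.ndarray, tail: np.ndarray) -> float:
    # DistMult triple score: <head, rel, tail> = sum_i head_i * rel_i * tail_i
    return float(np.sum(head * rel * tail))

# Composite reasoning chains (h, r1, m) and (m, r2, t) to infer a fact
# about (h, t). Under DistMult's diagonal parameterization, chaining
# corresponds to composing relations elementwise:
composed_rel = r1 * r2
composite_score = score(h, composed_rel, t)

# If the learned embeddings compose badly (large mismatch between the
# composite score and the scores of the individual hops), the model
# exhibits the "composition error" the paper sets out to reduce.
print(score(h, r1, m), score(m, r2, t), composite_score)
```

The point of the sketch is only that a TF-based score function induces a natural notion of relation composition, so the quality of composed scores can be measured and regularized.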

Published

2024-03-24

How to Cite

Cui, W., & Zhang, L. (2024). Modeling Knowledge Graphs with Composite Reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(8), 8338-8345. https://doi.org/10.1609/aaai.v38i8.28675

Issue

Section

AAAI Technical Track on Data Mining & Knowledge Management