Learning to Walk with Dual Agents for Knowledge Graph Reasoning


  • Denghui Zhang Rutgers University
  • Zixuan Yuan Rutgers University
  • Hao Liu HKUST
  • Xiaodong Lin Rutgers University
  • Hui Xiong Rutgers University




Knowledge Representation And Reasoning (KRR), Data Mining & Knowledge Management (DMKM)


Graph walking based on reinforcement learning (RL) has shown great success in navigating an agent to automatically complete various reasoning tasks over an incomplete knowledge graph (KG) by exploring multi-hop relational paths. However, existing multi-hop reasoning approaches only work well on short reasoning paths and tend to miss the target entity as the path length increases. This is undesirable for many reasoning tasks in real-world scenarios, where short paths connecting the source and target entities are not available in incomplete KGs, and thus reasoning performance drops drastically unless the agent can seek out more clues from longer paths. To address this challenge, in this paper, we propose a dual-agent reinforcement learning framework that trains two agents (Giant and Dwarf) to walk over a KG jointly and search for the answer collaboratively. Our approach tackles the reasoning challenge on long paths by assigning one agent (Giant) to search quickly over cluster-level paths and provide stage-wise hints for the other agent (Dwarf). Finally, experimental results on several KG reasoning benchmarks show that our approach searches for answers more accurately and efficiently, and outperforms existing RL-based methods on long path queries by a large margin.
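To make the dual-agent idea concrete, here is a minimal toy sketch of the hierarchical walk described above: a coarse cluster-level walker produces stage-wise hints that constrain a fine-grained entity-level walker. All entity names, cluster labels, and function names below are illustrative assumptions, not the paper's actual implementation (which learns both policies with RL rather than walking greedily).

```python
# Toy KG: entity-level edges plus a mapping from entities to clusters.
# All names here are hypothetical examples, not from the paper.
EDGES = {
    "paris": [("capital_of", "france")],
    "france": [("in_continent", "europe")],
    "europe": [("has_country", "germany")],
    "germany": [("capital", "berlin")],
}
CLUSTER = {"paris": "city", "france": "country", "europe": "continent",
           "germany": "country", "berlin": "city"}

def giant_walk(start, max_hops):
    """Cluster-level agent (Giant): quickly sketch a coarse path of
    clusters by recording the cluster of each entity it passes."""
    hints, entity = [CLUSTER[start]], start
    for _ in range(max_hops):
        if entity not in EDGES:
            break
        _, entity = EDGES[entity][0]   # greedy stand-in for a learned policy
        hints.append(CLUSTER[entity])
    return hints

def dwarf_walk(start, hints):
    """Entity-level agent (Dwarf): walk the fine-grained graph, only
    taking steps whose target entity matches the next cluster hint."""
    path, entity = [start], start
    for hint in hints[1:]:
        candidates = [t for _, t in EDGES.get(entity, [])
                      if CLUSTER[t] == hint]
        if not candidates:
            break
        entity = candidates[0]
        path.append(entity)
    return path

hints = giant_walk("paris", max_hops=4)
path = dwarf_walk("paris", hints)
```

The point of the hierarchy is that the cluster-level search space is much smaller than the entity-level one, so the Giant can look far ahead cheaply, and its stage-wise hints prune the Dwarf's choices at every hop of a long path.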




How to Cite

Zhang, D., Yuan, Z., Liu, H., Lin, X., & Xiong, H. (2022). Learning to Walk with Dual Agents for Knowledge Graph Reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(5), 5932-5941. https://doi.org/10.1609/aaai.v36i5.20538



AAAI Technical Track on Knowledge Representation and Reasoning