Unifying Multi-View Knowledge for Graph Learning via Model Collaboration

Authors

  • Zhihao Wu Zhejiang Key Laboratory of Accessible Perception and Intelligent Systems, College of Computer Science and Technology, Zhejiang University
  • Jielong Lu Zhejiang Key Laboratory of Accessible Perception and Intelligent Systems, College of Computer Science and Technology, Zhejiang University
  • Zihan Fang College of Computer and Data Science, Fuzhou University
  • Jinyu Cai Institute of Data Science, National University of Singapore
  • Guangyong Chen Hangzhou Institute of Medicine, Chinese Academy of Sciences
  • Jiajun Bu Zhejiang Key Laboratory of Accessible Perception and Intelligent Systems, College of Computer Science and Technology, Zhejiang University, Hangzhou, China
  • Haishuai Wang Zhejiang Key Laboratory of Accessible Perception and Intelligent Systems, College of Computer Science and Technology, Zhejiang University, Hangzhou, China

DOI:

https://doi.org/10.1609/aaai.v40i32.39914

Abstract

With the increasing scale and complexity of graph data, node attributes are becoming richer and more complex, particularly in the form of informative text. Classic GNNs equipped with shallow attribute encoders are no longer sufficient to handle such data on their own, making model collaboration across heterogeneous architectures an inevitable trend. Recently, the integration of Large Language Models (LLMs) and GNNs has attracted significant attention, yet the inherent disparity between these models remains a key challenge. Promising solutions have considered fine-tuning Small Language Models (SLMs) to bridge the gap between GNNs and frozen LLMs. However, this introduces another problem: these heterogeneous models contribute complementary knowledge, yet how to integrate them effectively and enable mutual refinement remains a significant research gap. To address these challenges, we introduce COLA, a collaborative large–small model framework that enables seamless cooperation among semantic LLMs, task-specific fine-tuned SLMs, and structure-aware GNNs. COLA features a unique Consensus–Complement Coordination Mechanism (C3M), wherein its Mixture-of-Coordinators (MoC) architecturally aligns the LLM and SLM. Built upon this, a flexible graph-knowledge infusion strategy encourages the joint alignment of textual representations and the learning of graph knowledge. Extensive evaluations across nine diverse datasets show that COLA consistently achieves state-of-the-art performance, validating the effectiveness and generality of our collaborative paradigm.

Published

2026-03-14

How to Cite

Wu, Z., Lu, J., Fang, Z., Cai, J., Chen, G., Bu, J., & Wang, H. (2026). Unifying Multi-View Knowledge for Graph Learning via Model Collaboration. Proceedings of the AAAI Conference on Artificial Intelligence, 40(32), 27010–27018. https://doi.org/10.1609/aaai.v40i32.39914

Section

AAAI Technical Track on Machine Learning IX