Empowering Dual-Level Graph Self-Supervised Pretraining with Motif Discovery

Authors

  • Pengwei Yan, Department of Information Resources Management, Zhejiang University, Hangzhou, 310058, China; Alibaba Group, Hangzhou, 311121, China
  • Kaisong Song, Alibaba Group, Hangzhou, 311121, China; Northeastern University, Shenyang, 110819, China
  • Zhuoren Jiang, Department of Information Resources Management, Zhejiang University, Hangzhou, 310058, China
  • Yangyang Kang, Alibaba Group, Hangzhou, 311121, China
  • Tianqianjin Lin, Department of Information Resources Management, Zhejiang University, Hangzhou, 310058, China; Alibaba Group, Hangzhou, 311121, China
  • Changlong Sun, Alibaba Group, Hangzhou, 311121, China
  • Xiaozhong Liu, Computer Science Department, Worcester Polytechnic Institute, Worcester, MA 01609-2280, USA

DOI:

https://doi.org/10.1609/aaai.v38i8.28774

Keywords:

DMKM: Graph Mining, Social Network Analysis & Community, ML: Unsupervised & Self-Supervised Learning, ML: Deep Learning Algorithms, ML: Multi-instance/Multi-view Learning

Abstract

While self-supervised graph pretraining techniques have shown promising results in various domains, their application still faces challenges: limited topology learning, dependence on human knowledge, and inadequate multi-level interactions. To address these issues, we propose a novel solution, Dual-level Graph self-supervised Pretraining with Motif discovery (DGPM), which introduces a unique dual-level pretraining structure that orchestrates node-level and subgraph-level pretext tasks. Unlike prior approaches, DGPM autonomously uncovers significant graph motifs through an edge pooling module, aligning learned motif similarities with graph kernel-based similarities. A cross-matching task enables sophisticated node-motif interactions and novel representation learning. Extensive experiments on 15 datasets validate DGPM's effectiveness and generalizability, outperforming state-of-the-art methods in unsupervised representation learning and transfer learning settings. The autonomously discovered motifs demonstrate DGPM's potential to enhance robustness and interpretability.
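The edge pooling idea behind the abstract's motif discovery can be illustrated with a minimal sketch. This is not the authors' implementation: DGPM learns its edge scores during pretraining, whereas here a toy shared-neighbor heuristic stands in for the learned scorer, and contracted clusters play the role of candidate motifs.

```python
def edge_pool(edges, num_nodes):
    """One round of edge-pooling-style coarsening: contract edges greedily
    in score order, with each node joining at most one contraction."""
    adj = {v: set() for v in range(num_nodes)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Toy edge score: number of shared neighbors (a stand-in for the
    # learned edge score a pretraining module would produce).
    scored = sorted(edges, key=lambda e: -len(adj[e[0]] & adj[e[1]]))
    cluster, used, next_id = {}, set(), 0
    for u, v in scored:
        if u not in used and v not in used:
            cluster[u] = cluster[v] = next_id  # contract edge (u, v)
            used.update((u, v))
            next_id += 1
    for v in range(num_nodes):  # leftover nodes become singleton clusters
        if v not in cluster:
            cluster[v], next_id = next_id, next_id + 1
    return cluster

# Two triangles (0-1-2 and 3-4-5) joined by the bridge edge (2, 3):
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
clusters = edge_pool(edges, num_nodes=6)
```

Triangle-internal edges score higher than the bridge, so contractions stay inside the triangles; repeated pooling rounds would coarsen such clusters into larger motif candidates, whose representations could then be aligned with graph kernel-based similarities as the abstract describes.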

Published

2024-03-24

How to Cite

Yan, P., Song, K., Jiang, Z., Kang, Y., Lin, T., Sun, C., & Liu, X. (2024). Empowering Dual-Level Graph Self-Supervised Pretraining with Motif Discovery. Proceedings of the AAAI Conference on Artificial Intelligence, 38(8), 9223-9231. https://doi.org/10.1609/aaai.v38i8.28774


Section

AAAI Technical Track on Data Mining & Knowledge Management