GNN-Transformer Cooperative Architecture for Trustworthy Graph Contrastive Learning

Authors

  • Jianqing Liang, Shanxi University
  • Xinkai Wei, Shanxi University
  • Min Chen, Shanxi University
  • Zhiqiang Wang, Shanxi University
  • Jiye Liang, Shanxi University

DOI:

https://doi.org/10.1609/aaai.v39i18.34054

Abstract

Graph contrastive learning (GCL) has become a hot topic in the field of graph representation learning. In contrast to traditional supervised learning, which relies on a large number of labels, GCL exploits augmentation techniques to generate multiple views and positive/negative pairs, both of which greatly influence performance. Unfortunately, commonly used random augmentations may disturb the underlying semantics of graphs. Moreover, traditional GNNs, widely employed as encoders in GCL, inevitably suffer from over-smoothing and over-squashing problems. To address these issues, we propose a GNN-Transformer Cooperative Architecture for Trustworthy Graph Contrastive Learning (GTCA), which inherits the advantages of both GNNs and Transformers and incorporates graph topology to obtain comprehensive graph representations. Theoretical analysis verifies the trustworthiness of the proposed method. Extensive experiments on benchmark datasets demonstrate state-of-the-art empirical performance.
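To make the contrastive setup concrete: GCL methods typically generate two augmented views of a graph, encode both, and train so that matched node embeddings (positive pairs) score higher than mismatched ones (negative pairs). The sketch below shows a generic InfoNCE-style contrastive loss commonly used in GCL baselines; it is an illustration under standard assumptions, not the GTCA objective from the paper, and the function and variable names are hypothetical.

```python
import numpy as np

def info_nce_loss(z1, z2, tau=0.5):
    """Generic InfoNCE-style contrastive loss between two views.

    z1, z2: (n, d) node embeddings from two augmented graph views.
    Row i of z1 and row i of z2 form a positive pair; every other
    cross-view row acts as a negative. This is a common GCL baseline
    loss, not the paper's GTCA objective.
    """
    # L2-normalize so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau                      # (n, n) similarity matrix
    # Row-wise log-softmax; the diagonal entries are the positives.
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = z1 + 0.05 * rng.normal(size=(8, 16))  # mildly augmented second view
z_rand = rng.normal(size=(8, 16))          # unrelated embeddings

loss_pos = info_nce_loss(z1, z2)       # aligned views: low loss
loss_rand = info_nce_loss(z1, z_rand)  # random views: high loss
```

As the abstract notes, the quality of the augmentations matters: if an augmentation disturbs the graph's semantics, the "positive" pairs no longer share meaning and this loss pulls together embeddings that should stay apart.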

Published

2025-04-11

How to Cite

Liang, J., Wei, X., Chen, M., Wang, Z., & Liang, J. (2025). GNN-Transformer Cooperative Architecture for Trustworthy Graph Contrastive Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(18), 18667–18675. https://doi.org/10.1609/aaai.v39i18.34054

Section

AAAI Technical Track on Machine Learning IV