ReGCL: Rethinking Message Passing in Graph Contrastive Learning

Authors

  • Cheng Ji: Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, China; School of Computer Science and Engineering, Beihang University, China
  • Zixuan Huang: School of Computer Science and Engineering, Beihang University, China
  • Qingyun Sun: Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, China; School of Computer Science and Engineering, Beihang University, China
  • Hao Peng: Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, China; School of Computer Science and Engineering, Beihang University, China
  • Xingcheng Fu: Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, China
  • Qian Li: Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, China; School of Computer Science and Engineering, Beihang University, China
  • Jianxin Li: Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, China; School of Computer Science and Engineering, Beihang University, China

DOI:

https://doi.org/10.1609/aaai.v38i8.28698

Keywords:

DMKM: Graph Mining, Social Network Analysis & Community

Abstract

Graph contrastive learning (GCL) has demonstrated remarkable efficacy in graph representation learning. However, previous studies have overlooked an inherent conflict that arises when graph neural networks (GNNs) are employed as encoders for node-level contrastive learning: the feature aggregation mechanism of GNNs is partially incongruous with the embedding-distinction objective of contrastive learning. To locate the conflict and characterize its extent, we theoretically analyze the participation of message passing from the gradient perspective of the InfoNCE loss. Unlike contrastive learning in other domains, the conflict in GCL arises because, under message passing, certain samples contribute simultaneously to the gradients of both the positive and the negative pairs, which pull in opposite optimization directions. To address this conflict, we propose a practical framework called ReGCL, which leverages the theoretical findings on GCL gradients to improve graph contrastive learning. Specifically, two gradient-based strategies are devised, targeting message passing and the loss function respectively. First, a gradient-guided structure learning method is proposed to learn a structure adapted to contrastive learning. Second, a gradient-weighted InfoNCE loss is designed to reduce, from the standpoint of the graph encoder, the impact of samples that are likely false negatives. Extensive experiments demonstrate the superiority of the proposed method over state-of-the-art baselines on various node classification benchmarks.
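The gradient perspective underlying the abstract can be made concrete with a small sketch. This is an illustrative example, not the authors' implementation: the function name `infonce_sim_grads`, the random embeddings, and the temperature value are all assumptions. It computes the gradient of the InfoNCE loss for one anchor with respect to the pairwise similarities, showing that the positive-pair term receives an attractive (negative) gradient while every negative-pair term receives a repulsive (positive) one. With a message-passing encoder such as a one-layer mean aggregator H = Â·X, both kinds of gradient can back-propagate onto the same node's raw features, which is the conflict the paper describes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, tau = 5, 8, 0.5          # 5 nodes, 8-dim embeddings, temperature (assumed)
z1 = rng.normal(size=(n, d))   # embeddings of view 1 (stand-ins for GNN outputs)
z2 = rng.normal(size=(n, d))   # embeddings of view 2

def infonce_sim_grads(z1, z2, i, tau):
    """Gradient of anchor i's InfoNCE loss w.r.t. its cosine
    similarities to all candidates in the other view."""
    z1n = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2n = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sims = z1n[i] @ z2n.T / tau
    p = np.exp(sims - sims.max())
    p /= p.sum()                # softmax over the candidate set
    g = p.copy()
    g[i] -= 1.0                 # d(-log p_i)/d sims_j = p_j - [j == i]
    return g

g = infonce_sim_grads(z1, z2, 0, tau)
# g[0] < 0: the positive pair is pulled together;
# g[j] > 0 for j != 0: negatives are pushed apart.
# With H = A_hat @ X, gradients flow back as dL/dX = A_hat.T @ dL/dH,
# so a neighbor k of anchor i receives the attractive gradient through
# H_i AND the repulsive gradient through H_k at the same time.
```

The gradients sum to zero for each anchor, so the attractive and repulsive forces are balanced in aggregate; the conflict is that message passing mixes them onto individual nodes' features.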

Published

2024-03-24

How to Cite

Ji, C., Huang, Z., Sun, Q., Peng, H., Fu, X., Li, Q., & Li, J. (2024). ReGCL: Rethinking Message Passing in Graph Contrastive Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(8), 8544-8552. https://doi.org/10.1609/aaai.v38i8.28698

Section

AAAI Technical Track on Data Mining & Knowledge Management