MA-GCL: Model Augmentation Tricks for Graph Contrastive Learning
DOI:
https://doi.org/10.1609/aaai.v37i4.25547
Keywords:
DMKM: Graph Mining, Social Network Analysis & Community Mining, ML: Graph-based Machine Learning
Abstract
Contrastive learning (CL), which can extract the information shared between different contrastive views, has become a popular paradigm for vision representation learning. Inspired by this success in computer vision, recent work introduces CL into graph modeling, dubbed graph contrastive learning (GCL). However, generating contrastive views for graphs is more challenging than for images, since we have little prior knowledge of how to significantly augment a graph without changing its labels. We argue that typical data augmentation techniques (e.g., edge dropping) in GCL cannot generate sufficiently diverse contrastive views to filter out noises. Moreover, previous GCL methods employ two view encoders with exactly the same neural architecture and tied parameters, which further harms the diversity of augmented views. To address these limitations, we propose a novel paradigm named model augmented GCL (MA-GCL), which focuses on manipulating the architectures of view encoders instead of perturbing graph inputs. Specifically, we present three easy-to-implement model augmentation tricks for GCL, namely asymmetric, random and shuffling, which can respectively help alleviate high-frequency noises, enrich training instances and bring safer augmentations. All three tricks are compatible with typical data augmentations. Experimental results show that MA-GCL can achieve state-of-the-art performance on node classification benchmarks by applying the three tricks to a simple base model. Extensive studies also validate our motivation and the effectiveness of each trick. (Code, data and appendix are available at https://github.com/GXM1141/MA-GCL. )
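To illustrate the core idea of model augmentation as opposed to data augmentation, the sketch below generates two contrastive views of the *same, unperturbed* graph by running a shared GCN-style encoder with different, randomly sampled propagation depths. This is a minimal illustration of the paradigm only; the function names, the specific depth-sampling scheme, and the tanh transformation are assumptions for the example, not the paper's actual MA-GCL architecture.

```python
import numpy as np

def normalized_adjacency(A):
    # Symmetrically normalized adjacency with self-loops:
    # D^{-1/2} (A + I) D^{-1/2}, the standard GCN propagation operator.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def encode_view(A_norm, X, W, n_prop):
    # One contrastive view: apply the shared transformation W once,
    # then propagate n_prop times. The two views below differ only in
    # propagation depth (an encoder-side change), while the input graph
    # and features are left untouched.
    H = np.tanh(X @ W)
    for _ in range(n_prop):
        H = A_norm @ H
    return H

rng = np.random.default_rng(0)
# Toy 4-node path graph with random 8-dim features and a shared weight matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 4))

A_norm = normalized_adjacency(A)
# Randomly sampled, unequal depths: the two view encoders share parameters
# but differ in architecture, yielding diverse views without edge dropping.
L1, L2 = int(rng.integers(1, 4)), int(rng.integers(4, 7))
z1 = encode_view(A_norm, X, W, L1)
z2 = encode_view(A_norm, X, W, L2)
print(z1.shape, z2.shape)
```

A contrastive loss (e.g., InfoNCE) would then pull each node's `z1` and `z2` embeddings together; that training loop is omitted here for brevity.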
Published
2023-06-26
How to Cite
Gong, X., Yang, C., & Shi, C. (2023). MA-GCL: Model Augmentation Tricks for Graph Contrastive Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(4), 4284-4292. https://doi.org/10.1609/aaai.v37i4.25547
Issue
Section
AAAI Technical Track on Data Mining and Knowledge Management