Attribute and Structure Preserving Graph Contrastive Learning
DOI: https://doi.org/10.1609/aaai.v37i6.25858
Keywords: ML: Graph-based Machine Learning, ML: Unsupervised & Self-Supervised Learning, ML: Representation Learning, ML: Deep Neural Network Algorithms, ML: Classification and Regression
Abstract
Graph Contrastive Learning (GCL) has drawn much research interest due to its strong ability to capture both graph structure and node attribute information in a self-supervised manner. Current GCL methods usually adopt Graph Neural Networks (GNNs) as the base encoder, which typically rely on the homophily assumption of networks and overlook node similarity in the attribute space. There are many scenarios where such an assumption cannot be satisfied, or where node similarity plays a crucial role. In order to design a more robust mechanism, we develop a novel attribute and structure preserving graph contrastive learning framework, named ASP, which comprehensively and efficiently preserves node attributes while exploiting graph structure. Specifically, we consider three different graph views in our framework, i.e., the original view, the attribute view, and the global structure view. Then, we perform contrastive learning across the three views in a joint fashion, mining comprehensive graph information. We validate the effectiveness of the proposed framework on various real-world networks with different levels of homophily. The results demonstrate the superior performance of our model over representative baselines.
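The abstract does not spell out the joint contrastive objective, but cross-view node contrastive learning of this kind is commonly instantiated with an InfoNCE-style loss, where the same node under two views forms the positive pair. The sketch below is only an illustration of that general recipe, not the paper's actual ASP implementation; the function name, the toy embeddings, and the pairing of views are all assumptions for demonstration purposes.

```python
import numpy as np

def info_nce_loss(z1, z2, tau=0.5):
    """Illustrative InfoNCE loss between two views of the same node set.

    z1, z2: (n_nodes, dim) embeddings from two graph views.
    The i-th row of z1 and the i-th row of z2 are treated as a positive pair;
    all other cross-view rows serve as negatives.
    """
    # L2-normalize embeddings so the dot product is a cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                         # cross-view similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)    # numerical stability
    exp_sim = np.exp(sim)
    pos = np.diag(exp_sim)                        # positives on the diagonal
    return float(np.mean(-np.log(pos / exp_sim.sum(axis=1))))

# Toy example: three hypothetical views (original / attribute / global structure),
# contrasted pairwise against the original view and averaged.
rng = np.random.default_rng(0)
z_orig = rng.normal(size=(8, 16))
z_attr = rng.normal(size=(8, 16))
z_struct = rng.normal(size=(8, 16))
loss = (info_nce_loss(z_orig, z_attr) + info_nce_loss(z_orig, z_struct)) / 2
print(loss)
```

In practice the embeddings would come from GNN (or MLP) encoders applied to each view, and the loss would be minimized by gradient descent; how ASP combines the three views is described in the full paper.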
Published: 2023-06-26
How to Cite
Chen, J., & Kou, G. (2023). Attribute and Structure Preserving Graph Contrastive Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 7024-7032. https://doi.org/10.1609/aaai.v37i6.25858
Section: AAAI Technical Track on Machine Learning I