Contrastive Self-supervised Learning for Graph Classification
Keywords: Unsupervised & Self-Supervised Learning
Abstract

Graph classification is a widely studied problem and has broad applications. In many real-world problems, the number of labeled graphs available for training classification models is limited, which renders these models prone to overfitting. To address this problem, we propose two approaches based on contrastive self-supervised learning (CSSL) to alleviate overfitting. In the first approach, we use CSSL to pretrain graph encoders on widely-available unlabeled graphs without relying on human-provided labels, then finetune the pretrained encoders on labeled graphs. In the second approach, we develop a regularizer based on CSSL, and solve the supervised classification task and the unsupervised CSSL task simultaneously. To perform CSSL on graphs, given a collection of original graphs, we perform data augmentation to create augmented graphs out of the original graphs. An augmented graph is created by consecutively applying a sequence of graph alteration operations. A contrastive loss is defined to learn graph encoders by judging whether two augmented graphs are from the same original graph. Experiments on various graph classification datasets demonstrate the effectiveness of our proposed methods. The code is available at https://github.com/UCSD-AI4H/GraphSSL.
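The contrastive loss described in the abstract, judging whether two augmented graphs originate from the same original graph, can be illustrated with a minimal NumPy sketch of an NT-Xent-style loss. This is an assumption-laden illustration, not the paper's exact formulation (which may use, e.g., a MoCo-style setup with a momentum encoder): here a batch holds 2N graph embeddings, where rows 2k and 2k+1 are encodings of two augmentations of the same original graph.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent-style contrastive loss over 2N embeddings (one row per
    augmented graph); rows 2k and 2k+1 come from the same original graph.
    Illustrative sketch only; the paper's exact loss may differ."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    sim = z @ z.T / temperature
    n = z.shape[0]
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    pos = np.arange(n) ^ 1                            # positive partner: 0<->1, 2<->3, ...
    # log-softmax over each row, then pick the positive's log-probability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), pos].mean()
```

Embeddings whose augmentation pairs are close (and negatives far apart) yield a low loss, which is the signal the encoder is trained to produce.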
How to Cite
Zeng, J., & Xie, P. (2021). Contrastive Self-supervised Learning for Graph Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10824-10832. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17293
AAAI Technical Track on Machine Learning V