Contrastive Self-supervised Learning for Graph Classification

Authors

  • Jiaqi Zeng, Shanghai Jiao Tong University
  • Pengtao Xie, University of California San Diego

DOI:

https://doi.org/10.1609/aaai.v35i12.17293

Keywords:

Unsupervised & Self-Supervised Learning

Abstract

Graph classification is a widely studied problem with broad applications. In many real-world settings, the number of labeled graphs available for training classification models is limited, which renders these models prone to overfitting. To address this problem, we propose two approaches based on contrastive self-supervised learning (CSSL) to alleviate overfitting. In the first approach, we use CSSL to pretrain graph encoders on widely available unlabeled graphs without relying on human-provided labels, then fine-tune the pretrained encoders on labeled graphs. In the second approach, we develop a regularizer based on CSSL and solve the supervised classification task and the unsupervised CSSL task simultaneously. To perform CSSL on graphs, given a collection of original graphs, we apply data augmentation to create augmented graphs from the originals; an augmented graph is produced by applying a sequence of graph alteration operations. A contrastive loss is defined to learn graph encoders by judging whether two augmented graphs originate from the same original graph. Experiments on various graph classification datasets demonstrate the effectiveness of our proposed methods. The code is available at https://github.com/UCSD-AI4H/GraphSSL.
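
As a rough illustration of the recipe sketched in the abstract (not a reproduction of the released code at the GitHub link above), the Python sketch below augments each graph twice with a hypothetical edge-dropping operation, encodes both views with a placeholder one-layer GCN, and applies an NT-Xent-style contrastive loss that treats the two views of the same original graph as a positive pair. The encoder, the augmentation, and all hyperparameters here are illustrative assumptions.

```python
# Minimal, illustrative sketch of the CSSL recipe described in the abstract:
# create two augmented views of each graph, encode them, and train with a
# contrastive loss that judges whether two views come from the same original
# graph. This is NOT the authors' implementation; the encoder, augmentation,
# and hyperparameters are assumptions for illustration only.
import torch
import torch.nn.functional as F

def drop_edges(adj: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """One example graph alteration operation: randomly delete edges."""
    keep = (torch.rand_like(adj) > p).float()
    keep = torch.triu(keep, diagonal=1)   # sample each undirected edge once
    return adj * (keep + keep.t())

class GCNEncoder(torch.nn.Module):
    """Placeholder graph encoder: one GCN-style layer + mean-pooling readout."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, hid_dim)

    def forward(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        a_hat = adj + torch.eye(adj.size(0))        # add self-loops
        deg = a_hat.sum(dim=1, keepdim=True).clamp(min=1)
        h = F.relu(self.lin((a_hat @ x) / deg))     # mean-style aggregation
        return h.mean(dim=0)                        # graph-level embedding

def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5):
    """NT-Xent-style loss: row i of z1 and row i of z2 are views of the same
    original graph (a positive pair); every other pair is a negative."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)     # (2n, d)
    sim = (z @ z.t()) / tau
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Toy run: four random 10-node graphs with 8-dimensional node features.
torch.manual_seed(0)
encoder = GCNEncoder(in_dim=8, hid_dim=16)
graphs = []
for _ in range(4):
    a = torch.bernoulli(torch.full((10, 10), 0.3))
    a = (a + a.t()).clamp(max=1) * (1 - torch.eye(10))  # symmetric, no self-loops
    graphs.append((a, torch.randn(10, 8)))
z1 = torch.stack([encoder(drop_edges(a), x) for a, x in graphs])
z2 = torch.stack([encoder(drop_edges(a), x) for a, x in graphs])
loss = contrastive_loss(z1, z2)
loss.backward()   # ready for an optimizer step
print(f"contrastive loss: {loss.item():.4f}")
```

In a pretraining setup, only the contrastive loss would be minimized before fine-tuning on labeled graphs; in the regularization setup described in the abstract, this loss would be added to the supervised classification objective and both would be optimized jointly.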

Published

2021-05-18

How to Cite

Zeng, J., & Xie, P. (2021). Contrastive Self-supervised Learning for Graph Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10824-10832. https://doi.org/10.1609/aaai.v35i12.17293

Issue

Vol. 35 No. 12 (2021)

Section

AAAI Technical Track on Machine Learning V