AutoGCL: Automated Graph Contrastive Learning via Learnable View Generators

Authors

  • Yihang Yin, Nanyang Technological University
  • Qingzhong Wang, Baidu Research
  • Siyu Huang, Harvard University
  • Haoyi Xiong, Baidu Research
  • Xiang Zhang, The Pennsylvania State University

DOI:

https://doi.org/10.1609/aaai.v36i8.20871

Keywords:

Machine Learning (ML)

Abstract

Contrastive learning has been widely applied to graph representation learning, where the view generators play a vital role in generating effective contrastive samples. Most of the existing contrastive learning methods employ pre-defined view generation methods, e.g., node drop or edge perturbation, which usually cannot adapt to input data or preserve the original semantic structures well. To address this issue, we propose a novel framework named Automated Graph Contrastive Learning (AutoGCL) in this paper. Specifically, AutoGCL employs a set of learnable graph view generators orchestrated by an auto augmentation strategy, where every graph view generator learns a probability distribution of graphs conditioned on the input. While the graph view generators in AutoGCL preserve the most representative structures of the original graph when generating each contrastive sample, the auto augmentation learns policies to introduce adequate augmentation variance into the whole contrastive learning procedure. Furthermore, AutoGCL adopts a joint training strategy to train the learnable view generators, the graph encoder, and the classifier in an end-to-end manner, resulting in topological heterogeneity yet semantic similarity in the generated contrastive samples. Extensive experiments on semi-supervised learning, unsupervised learning, and transfer learning demonstrate the superiority of our AutoGCL framework over state-of-the-art graph contrastive learning methods. In addition, the visualization results further confirm that the learnable view generators deliver more compact and semantically meaningful contrastive samples compared with the existing view generation methods. Our code is available at https://github.com/Somedaywilldo/AutoGCL.
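
To make the idea of a learnable, input-conditioned view generator concrete, the following is a minimal PyTorch sketch (not the authors' released implementation; all names, the simplified mean-aggregation message passing, and the dense adjacency representation are illustrative assumptions). A small GNN predicts a per-node distribution over augmentation actions (drop / keep / mask) and samples it with Gumbel-Softmax, so the augmentation choice stays differentiable and the generator can be trained jointly with the encoder and classifier.

```python
# Hypothetical sketch of a learnable view generator in the spirit of AutoGCL.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewGenerator(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int = 64):
        super().__init__()
        self.encode = nn.Linear(in_dim, hid_dim)   # stand-in for a GNN layer
        self.action_head = nn.Linear(hid_dim, 3)   # logits for drop / keep / mask

    def forward(self, x: torch.Tensor, adj: torch.Tensor, tau: float = 1.0):
        # One round of mean aggregation over neighbours (simplified message passing).
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        h = F.relu(self.encode((adj @ x) / deg))
        logits = self.action_head(h)
        # Differentiable, near one-hot sample of the per-node action.
        action = F.gumbel_softmax(logits, tau=tau, hard=True)     # [N, 3]
        drop, keep, mask = action[:, 0:1], action[:, 1:2], action[:, 2:3]
        # Build the augmented view: kept nodes pass through, masked nodes get
        # zeroed features, dropped nodes are removed by zeroing their adjacency.
        x_aug = keep * x + mask * torch.zeros_like(x)
        node_alive = 1.0 - drop                                    # [N, 1]
        adj_aug = adj * node_alive * node_alive.t()
        return x_aug, adj_aug

# Usage: two independent generators produce the two contrastive views,
# which would then be fed to a shared encoder for a contrastive (e.g. NT-Xent) loss.
if __name__ == "__main__":
    x = torch.randn(5, 16)                    # toy graph: 5 nodes, 16-d features
    adj = (torch.rand(5, 5) > 0.5).float()
    adj = ((adj + adj.t()) > 0).float()
    gen1, gen2 = ViewGenerator(16), ViewGenerator(16)
    view1, view2 = gen1(x, adj), gen2(x, adj)
```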

Published

2022-06-28

How to Cite

Yin, Y., Wang, Q., Huang, S., Xiong, H., & Zhang, X. (2022). AutoGCL: Automated Graph Contrastive Learning via Learnable View Generators. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 8892-8900. https://doi.org/10.1609/aaai.v36i8.20871

Section

AAAI Technical Track on Machine Learning III