TY - JOUR
AU - Yin, Yihang
AU - Wang, Qingzhong
AU - Huang, Siyu
AU - Xiong, Haoyi
AU - Zhang, Xiang
PY - 2022/06/28
Y2 - 2024/03/28
TI - AutoGCL: Automated Graph Contrastive Learning via Learnable View Generators
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 36
IS - 8
SE - AAAI Technical Track on Machine Learning III
DO - 10.1609/aaai.v36i8.20871
UR - https://ojs.aaai.org/index.php/AAAI/article/view/20871
SP - 8892-8900
AB - Contrastive learning has been widely applied to graph representation learning, where the view generators play a vital role in generating effective contrastive samples. Most of the existing contrastive learning methods employ pre-defined view generation methods, e.g., node drop or edge perturbation, which usually cannot adapt to input data or preserve the original semantic structures well. To address this issue, we propose a novel framework named Automated Graph Contrastive Learning (AutoGCL) in this paper. Specifically, AutoGCL employs a set of learnable graph view generators orchestrated by an auto augmentation strategy, where every graph view generator learns a probability distribution of graphs conditioned by the input. While the graph view generators in AutoGCL preserve the most representative structures of the original graph in generation of every contrastive sample, the auto augmentation learns policies to introduce adequate augmentation variances in the whole contrastive learning procedure. Furthermore, AutoGCL adopts a joint training strategy to train the learnable view generators, the graph encoder, and the classifier in an end-to-end manner, resulting in topological heterogeneity yet semantic similarity in the generation of contrastive samples. Extensive experiments on semi-supervised learning, unsupervised learning, and transfer learning demonstrate the superiority of our AutoGCL framework over the state-of-the-arts in graph contrastive learning. In addition, the visualization results further confirm that the learnable view generators can deliver more compact and semantically meaningful contrastive samples compared against the existing view generation methods. Our code is available at https://github.com/Somedaywilldo/AutoGCL.
ER -