Rethinking Graph Masked Autoencoders through Alignment and Uniformity

Authors

  • Liang Wang — Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Xiang Tao — Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Qiang Liu — Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Shu Wu — Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Liang Wang — Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v38i14.29479

Keywords:

ML: Graph-based Machine Learning, DMKM: Graph Mining, Social Network Analysis & Community, ML: Unsupervised & Self-Supervised Learning

Abstract

Self-supervised learning on graphs can be divided into contrastive and generative methods. Contrastive methods, also known as graph contrastive learning (GCL), have dominated graph self-supervised learning in the past few years, but the recent advent of the graph masked autoencoder (GraphMAE) has rekindled momentum behind generative methods. Despite the empirical success of GraphMAE, there is still a dearth of theoretical understanding regarding its efficacy. Moreover, while both generative and contrastive methods have been shown to be effective, their connections and differences have yet to be thoroughly investigated. Therefore, we theoretically build a bridge between GraphMAE and GCL, and prove that the node-level reconstruction objective in GraphMAE implicitly performs context-level GCL. Based on our theoretical analysis, we further identify the limitations of GraphMAE from the perspectives of alignment and uniformity, which are considered two key properties of high-quality representations in GCL. We point out that GraphMAE's alignment performance is restricted by the masking strategy, and that its uniformity is not strictly guaranteed. To remedy these limitations, we propose an Alignment-Uniformity enhanced Graph Masked AutoEncoder, named AUG-MAE. Specifically, we propose an easy-to-hard adversarial masking strategy to provide hard-to-align samples, which improves the alignment performance. Meanwhile, we introduce an explicit uniformity regularizer to ensure the uniformity of the learned representations. Experimental results on benchmark datasets demonstrate the superiority of our model over existing state-of-the-art methods. The code is available at: https://github.com/AzureLeon1/AUG-MAE.
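The abstract does not spell out the form of the explicit uniformity regularizer. A common formulation in the alignment-and-uniformity literature is the log of the mean pairwise Gaussian potential over normalized embeddings, which is minimized when points spread evenly over the unit hypersphere. The sketch below illustrates that idea in plain Python; the function name, the temperature `t`, and the choice of potential are assumptions for illustration, not the paper's exact objective.

```python
import math

def l2_normalize(v):
    """Project an embedding onto the unit hypersphere."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def uniformity_loss(embeddings, t=2.0):
    """Log of the mean pairwise Gaussian potential exp(-t * ||z_i - z_j||^2).

    Lower values mean the normalized embeddings are more uniformly
    spread on the hypersphere; collapsed (clumped) embeddings give a
    value near 0, since every pairwise distance is near 0.
    """
    zs = [l2_normalize(v) for v in embeddings]
    total, count = 0.0, 0
    for i in range(len(zs)):
        for j in range(i + 1, len(zs)):
            d2 = sum((a - b) ** 2 for a, b in zip(zs[i], zs[j]))
            total += math.exp(-t * d2)
            count += 1
    return math.log(total / count)

# Four points spread around the circle score much lower (better)
# than four nearly identical points.
spread = [[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]
clumped = [[1.0, 0.0], [1.0, 0.01], [1.0, -0.01], [1.0, 0.02]]
```

Adding such a term to the reconstruction loss gives the training objective a direct incentive to avoid representation collapse, rather than relying on the reconstruction target alone.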

Published

2024-03-24

How to Cite

Wang, L., Tao, X., Liu, Q., Wu, S., & Wang, L. (2024). Rethinking Graph Masked Autoencoders through Alignment and Uniformity. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14), 15528-15536. https://doi.org/10.1609/aaai.v38i14.29479

Section

AAAI Technical Track on Machine Learning V