Rethinking Graph Regularization for Graph Neural Networks

Authors

  • Han Yang, The Chinese University of Hong Kong
  • Kaili Ma, The Chinese University of Hong Kong
  • James Cheng, The Chinese University of Hong Kong

DOI:

https://doi.org/10.1609/aaai.v35i5.16586

Keywords:

Graph Mining, Social Network Analysis & Community, Graph-based Machine Learning

Abstract

The graph Laplacian regularization term is usually used in semi-supervised representation learning to provide graph structure information for a model f(X). However, with the recent popularity of graph neural networks (GNNs), directly encoding the graph structure A into a model, i.e., f(A, X), has become the more common approach. We show that graph Laplacian regularization brings little-to-no benefit to existing GNNs, and we propose a simple but non-trivial variant of graph Laplacian regularization, called Propagation-regularization (P-reg), to boost the performance of existing GNN models. We provide formal analyses to show that P-reg not only infuses extra information (which is not captured by traditional graph Laplacian regularization) into GNNs, but also has a capacity equivalent to that of an infinite-depth graph convolutional network. We demonstrate that P-reg can effectively boost the performance of existing GNN models on both node-level and graph-level tasks across many different datasets.
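Reading the abstract, P-reg can be pictured as a loss term that penalizes the discrepancy between a GNN's output and that same output propagated one more step over the graph. The following is a minimal, illustrative PyTorch sketch under that reading, using a squared-error discrepancy and a dense normalized adjacency; the names (p_reg_loss, a_hat, mu) are our own illustrative choices, not the authors' released implementation.

    import torch
    import torch.nn.functional as F

    def p_reg_loss(z, a_hat):
        # z: GNN output logits, shape (n, c); a_hat: normalized adjacency, shape (n, n).
        # Penalize the gap between the predictions and their one-step propagation A_hat @ Z.
        z_prop = a_hat @ z
        return 0.5 * ((z_prop - z) ** 2).sum() / z.size(0)

    def training_loss(z, a_hat, labels, train_mask, mu=0.5):
        # Supervised cross-entropy on labeled nodes plus the mu-weighted P-reg term.
        cls = F.cross_entropy(z[train_mask], labels[train_mask])
        return cls + mu * p_reg_loss(z, a_hat)

In this sketch, mu plays the role of the regularization weight that trades off the supervised loss against the propagation penalty; other discrepancy choices (e.g., cross-entropy or KL divergence between z and its propagation) would slot into p_reg_loss in the same way.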

Published

2021-05-18

How to Cite

Yang, H., Ma, K., & Cheng, J. (2021). Rethinking Graph Regularization for Graph Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(5), 4573-4581. https://doi.org/10.1609/aaai.v35i5.16586

Issue

Vol. 35 No. 5 (2021)

Section

AAAI Technical Track on Data Mining and Knowledge Management