Learning Graph Neural Networks with Approximate Gradient Descent

Authors

  • Qunwei Li, Ant Group, Hangzhou, China
  • Shaofeng Zou, University at Buffalo, the State University of New York
  • Wenliang Zhong, Ant Group, Hangzhou, China

DOI:

https://doi.org/10.1609/aaai.v35i10.17025

Keywords:

(Deep) Neural Network Learning Theory, Graph-based Machine Learning

Abstract

This paper provides the first provably efficient algorithm for learning graph neural networks (GNNs) with one hidden layer for node information convolution. Two types of GNNs are investigated, depending on whether labels are attached to nodes or to graphs. A comprehensive framework for designing and analyzing the convergence of GNN training algorithms is developed. The proposed algorithm is applicable to a wide range of activation functions, including ReLU, Leaky ReLU, Sigmoid, Softplus, and Swish. It is shown that the proposed algorithm guarantees a linear convergence rate to the underlying true parameters of the GNN. For both types of GNNs, sample complexity is characterized in terms of the number of nodes or the number of graphs. The impact of feature dimension and GNN structure on the convergence rate is also theoretically characterized. Numerical experiments are further provided to validate the theoretical analysis.
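To fix ideas, the sketch below illustrates the kind of model the abstract describes: a one-hidden-layer GNN with neighbor aggregation, K ReLU hidden units averaged by a fixed second layer, and a plain (sub)gradient step on squared loss. The specific parameterization, the aggregation scheme, and the update rule are illustrative assumptions; the paper's exact model and its approximate-gradient construction are not reproduced here.

```python
# A minimal sketch (NOT the authors' exact parameterization) of a
# one-hidden-layer GNN for node-level labels. Names (W, agg, K) and the
# model form are assumptions made for illustration.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def gnn_forward(A, X, W):
    """One-hidden-layer GNN: aggregate neighbors, average K hidden units.

    A: (n, n) adjacency matrix (row-normalized here for aggregation)
    X: (n, d) node features
    W: (K, d) hidden-layer weights, one row per hidden unit
    Returns (n,) node-level predictions.
    """
    deg = A.sum(axis=1, keepdims=True)
    agg = (A / np.maximum(deg, 1.0)) @ X        # neighbor aggregation
    return relu(agg @ W.T).mean(axis=1)         # average over hidden units

# Toy run: random graph, features, and planted "true" weights W_star.
rng = np.random.default_rng(0)
n, d, K = 50, 4, 3
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.maximum(A, A.T)                          # make the graph undirected
X = rng.normal(size=(n, d))
W_star = rng.normal(size=(K, d))
y = gnn_forward(A, X, W_star)                   # labels from true parameters

# One gradient-descent step on squared loss w.r.t. W, using the ReLU
# subgradient. The paper's algorithm uses a more refined approximate
# gradient with convergence guarantees, which this sketch does not claim.
W = rng.normal(size=(K, d))
deg = A.sum(axis=1, keepdims=True)
agg = (A / np.maximum(deg, 1.0)) @ X
resid = gnn_forward(A, X, W) - y                # (n,) residuals
grad = np.stack([((resid / K) * (agg @ W[k] > 0)) @ agg for k in range(K)])
W -= 0.1 * grad                                 # step with learning rate 0.1
```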

Published

2021-05-18

How to Cite

Li, Q., Zou, S., & Zhong, W. (2021). Learning Graph Neural Networks with Approximate Gradient Descent. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 8438-8446. https://doi.org/10.1609/aaai.v35i10.17025

Section

AAAI Technical Track on Machine Learning III