Fast and Deep Graph Neural Networks

Authors

  • Claudio Gallicchio, University of Pisa
  • Alessio Micheli, University of Pisa

DOI:

https://doi.org/10.1609/aaai.v34i04.5803

Abstract

We address the efficiency issue in the construction of a deep graph neural network (GNN). The approach exploits the idea of representing each input graph as a fixed point of a dynamical system (implemented through a recurrent neural network), and leverages a deep architectural organization of the recurrent units. Efficiency is gained in several respects, including the use of small and very sparse networks, in which the weights of the recurrent units are left untrained under the stability condition introduced in this work. This can be viewed as a way to study the intrinsic power of the architecture of a deep GNN, and also to provide insights for the set-up of more complex fully-trained models. Through experimental results, we show that even without training of the recurrent connections, the architecture of small deep GNNs is surprisingly able to achieve or improve the state-of-the-art performance on a significant set of graph classification tasks.
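The core idea, computing node states as the fixed point of a contractive recurrent map with untrained sparse weights and then pooling them into a graph embedding, can be sketched as follows. This is a minimal illustration in the spirit of the abstract, not the paper's exact formulation: the `sparse_recurrent_matrix` helper, the spectral-radius rescaling used as a stand-in for the stability condition, the sum-pooling readout, and all hyperparameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_recurrent_matrix(n, density=0.1, rho=0.2):
    # Random, very sparse recurrent weights, rescaled so the spectral
    # radius is small. The weights are then left untrained; the rescaling
    # is a simple proxy for a stability condition on the recurrent map.
    W = rng.uniform(-1, 1, (n, n)) * (rng.random((n, n)) < density)
    spectral_radius = np.abs(np.linalg.eigvals(W)).max()
    return W * (rho / max(spectral_radius, 1e-12))

def graph_embedding(A, U, W_in, W_hat, iters=100, tol=1e-6):
    # Iterate x_v = tanh(W_in u_v + W_hat * sum_{w in N(v)} x_w)
    # toward its fixed point; A is the adjacency matrix, U the node
    # features. With a contractive map the iteration converges.
    X = np.zeros((A.shape[0], W_hat.shape[0]))
    for _ in range(iters):
        X_new = np.tanh(U @ W_in.T + A @ X @ W_hat.T)
        if np.max(np.abs(X_new - X)) < tol:
            return X_new.sum(axis=0)
        X = X_new
    return X.sum(axis=0)  # sum pooling of node states into a graph vector
```

A trainable readout (e.g. a linear classifier) would then be fit on these embeddings, so only the output layer requires training; a deep variant stacks several such recurrent layers, feeding each layer's node states to the next.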

Published

2020-04-03

How to Cite

Gallicchio, C., & Micheli, A. (2020). Fast and Deep Graph Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3898-3905. https://doi.org/10.1609/aaai.v34i04.5803

Section

AAAI Technical Track: Machine Learning