Repetition Makes Perfect: Recurrent Graph Neural Networks Match Message Passing Limit

Authors

  • Eran Rosenbluth RWTH Aachen University
  • Martin Grohe RWTH Aachen University

DOI

https://doi.org/10.1609/aaai.v40i30.39707

Abstract

We precisely characterize the expressivity of computable Recurrent Graph Neural Networks (recurrent GNNs). We prove that recurrent GNNs with finite-precision parameters, sum aggregation, and ReLU activation can compute any graph algorithm that respects the natural message-passing invariance induced by the Color Refinement (or Weisfeiler-Leman) algorithm. While it is well known that the expressive power of GNNs is limited by this invariance [Morris et al., AAAI 2019; Xu et al., ICLR 2019], we establish that recurrent GNNs can actually match this limit. This is in contrast to non-recurrent GNNs, which have the power of Weisfeiler-Leman only in a very weak, "non-uniform" sense, where a different GNN is required for each graph size. Our construction introduces only a polynomial overhead in both time and space. Furthermore, we show that by incorporating random initialization, recurrent GNNs can express all graph algorithms on connected graphs. In particular, any polynomial-time graph algorithm can be emulated on connected graphs in polynomial time by a recurrent GNN with random initialization.
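To make the setting concrete, the following is a minimal NumPy sketch, not taken from the paper, of the two objects the abstract relates: a recurrent GNN that applies one shared layer (sum aggregation, ReLU activation) for a number of steps, and the Color Refinement procedure whose induced invariance bounds what any such network can distinguish. All names here (`recurrent_gnn`, `color_refinement`, the weight matrices) are illustrative assumptions, not the authors' construction.

```python
import numpy as np

def recurrent_gnn(adj, x, w_self, w_agg, b, steps):
    """Recurrent GNN sketch: the SAME layer is applied `steps` times.
    Each step, a node combines its own state with the SUM of its
    neighbors' states, then applies ReLU."""
    h = x
    for _ in range(steps):
        h = np.maximum(h @ w_self + (adj @ h) @ w_agg + b, 0.0)  # ReLU
    return h

def color_refinement(adj, rounds):
    """Color Refinement (1-WL): each round, a node's new color is
    determined by its old color and the multiset of its neighbors'
    colors. Message-passing GNNs cannot separate nodes that this
    procedure never separates."""
    n = adj.shape[0]
    colors = [0] * n
    for _ in range(rounds):
        sigs = [
            (colors[i],
             tuple(sorted(colors[j] for j in np.flatnonzero(adj[i]))))
            for i in range(n)
        ]
        # Relabel signatures with small integers to keep colors canonical.
        palette = {sig: k for k, sig in enumerate(sorted(set(sigs)))}
        colors = [palette[sig] for sig in sigs]
    return colors

# Tiny example: a 4-cycle. Color Refinement keeps all nodes identically
# colored, so any recurrent GNN must treat them identically too.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
d = 3
h = recurrent_gnn(adj, np.ones((4, d)), rng.normal(size=(d, d)),
                  rng.normal(size=(d, d)), np.zeros(d), steps=8)
print(color_refinement(adj, rounds=4))  # [0, 0, 0, 0]
print(h)                                # all four rows identical
```

The recurrence is the point of the title: rather than stacking distinct layers, one finite-precision layer is iterated, and the paper shows this repetition suffices to reach the Color Refinement limit uniformly across graph sizes.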

Published

2026-03-14

How to Cite

Rosenbluth, E., & Grohe, M. (2026). Repetition Makes Perfect: Recurrent Graph Neural Networks Match Message Passing Limit. Proceedings of the AAAI Conference on Artificial Intelligence, 40(30), 25168–25175. https://doi.org/10.1609/aaai.v40i30.39707

Section

AAAI Technical Track on Machine Learning VII