Overcoming Catastrophic Forgetting in Graph Neural Networks

Authors

  • Huihui Liu, Stevens Institute of Technology
  • Yiding Yang, Stevens Institute of Technology
  • Xinchao Wang, Stevens Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v35i10.17049

Keywords:

(Deep) Neural Network Algorithms, Graph-based Machine Learning, Graph Mining, Social Network Analysis & Community, Applications

Abstract

Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks. Prior methods have focused on overcoming this problem for convolutional neural networks (CNNs), where input samples such as images lie in a grid domain, but have largely overlooked graph neural networks (GNNs) that handle non-grid data. In this paper, we propose a novel scheme dedicated to overcoming the catastrophic forgetting problem and hence strengthening continual learning in GNNs. At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP), applicable to arbitrary forms of GNNs in a plug-and-play fashion. Unlike mainstream CNN-based continual learning methods that rely solely on slowing down the updates of parameters important to the downstream task, TWP explicitly explores the local structures of the input graph and attempts to stabilize the parameters that play pivotal roles in the topological aggregation. We evaluate TWP on different GNN backbones over several datasets, and demonstrate that it yields performance superior to the state of the art. Code is publicly available at https://github.com/hhliu79/TWP.
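
The abstract describes TWP only at a high level: parameters are assigned an importance score that reflects both the downstream task and the graph's topological aggregation, and important parameters are discouraged from drifting when new tasks arrive. Below is a minimal sketch of such a topology-aware, importance-weighted penalty under those assumptions; the helper names (estimate_importance, twp_penalty), the choice of topology term, and the hyperparameters lambda_topo and beta are illustrative, not the authors' released implementation (see the linked repository for that).

    import torch

    def estimate_importance(model, task_loss, topo_term, lambda_topo=0.01):
        # Importance of each parameter = |grad of the task loss| plus lambda_topo
        # times |grad of a topology term| (e.g., summed attention coefficients of
        # a GNN layer). Hypothetical helper, not the authors' released code.
        params = [p for p in model.parameters() if p.requires_grad]
        g_task = torch.autograd.grad(task_loss, params, retain_graph=True, allow_unused=True)
        g_topo = torch.autograd.grad(topo_term, params, retain_graph=True, allow_unused=True)
        importance = {}
        for p, gt, gp in zip(params, g_task, g_topo):
            gt = torch.zeros_like(p) if gt is None else gt
            gp = torch.zeros_like(p) if gp is None else gp
            importance[p] = (gt.abs() + lambda_topo * gp.abs()).detach()
        return importance

    def twp_penalty(model, old_params, importance):
        # Quadratic penalty that slows down updates of parameters deemed
        # important for previous tasks or their topological aggregation.
        penalty = 0.0
        for p in model.parameters():
            if p in importance:
                penalty = penalty + (importance[p] * (p - old_params[p]) ** 2).sum()
        return penalty

    # Usage sketch, after finishing a task:
    #   importance = estimate_importance(model, task_loss, topo_term)
    #   old_params = {p: p.detach().clone() for p in model.parameters()}
    # When training the next task, add beta * twp_penalty(model, old_params, importance)
    # to the new task's loss so that topologically important weights are preserved.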

Published

2021-05-18

How to Cite

Liu, H., Yang, Y., & Wang, X. (2021). Overcoming Catastrophic Forgetting in Graph Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 8653-8661. https://doi.org/10.1609/aaai.v35i10.17049

Section

AAAI Technical Track on Machine Learning III