Learning from the Past: Continual Meta-Learning with Bayesian Graph Neural Networks


  • Yadan Luo The University of Queensland
  • Zi Huang The University of Queensland
  • Zheng Zhang Harbin Institute of Technology
  • Ziwei Wang The University of Queensland
  • Mahsa Baktashmotlagh The University of Queensland
  • Yang Yang University of Electronic Science and Technology of China




Meta-learning for few-shot learning allows a machine to leverage previously acquired knowledge as a prior, thus improving performance on novel tasks with only small amounts of data. However, most mainstream models suffer from catastrophic forgetting and insufficient robustness, failing to fully retain or exploit long-term knowledge while remaining prone to severe error accumulation. In this paper, we propose a novel Continual Meta-Learning approach with Bayesian Graph Neural Networks (CML-BGNN) that mathematically formulates meta-learning as continual learning over a sequence of tasks. With each task formed as a graph, the intra- and inter-task correlations can be well preserved via message passing and history transition. To remedy topological uncertainty from graph initialization, we adopt a Bayes by Backprop strategy that approximates the posterior distribution of task-specific parameters with amortized inference networks, which are seamlessly integrated into end-to-end edge learning. Extensive experiments conducted on the miniImageNet and tieredImageNet datasets demonstrate the effectiveness and efficiency of the proposed method, improving performance by 42.8% compared with the state-of-the-art on the miniImageNet 5-way 1-shot classification task.
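The two building blocks that Bayes by Backprop rests on can be illustrated in a few lines: reparameterized sampling of weights from a Gaussian variational posterior, and a closed-form KL term against a standard normal prior. The sketch below is a minimal illustration under assumed shapes and an assumed N(0, I) prior; it is not the paper's edge-learning architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    """Maps rho to a strictly positive standard deviation."""
    return np.log1p(np.exp(x))

# Variational parameters for an illustrative 4-dim weight vector:
# q(w) = N(mu, sigma^2), with sigma = softplus(rho) to keep sigma > 0.
mu = np.array([0.5, -0.2, 0.1, 0.0])
rho = np.full(4, -1.0)
sigma = softplus(rho)

# Reparameterization trick: w = mu + sigma * eps with eps ~ N(0, I),
# so gradients could flow back to (mu, rho) through the sample.
eps = rng.standard_normal((10000, 4))
w_samples = mu + sigma * eps

# Closed-form KL(q || p) against the assumed standard normal prior p = N(0, I),
# summed over dimensions; this is the complexity cost in the Bayes by Backprop loss.
kl = 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma))

print(w_samples.mean(axis=0))  # close to mu
print(kl)                      # non-negative complexity cost
```

In training, the KL term would be added to the expected negative log-likelihood over the sampled weights, and both `mu` and `rho` would be updated by gradient descent through the reparameterized sample.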




How to Cite

Luo, Y., Huang, Z., Zhang, Z., Wang, Z., Baktashmotlagh, M., & Yang, Y. (2020). Learning from the Past: Continual Meta-Learning with Bayesian Graph Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5021-5028. https://doi.org/10.1609/aaai.v34i04.5942



AAAI Technical Track: Machine Learning