Efficient Spiking Neural Networks with Sparse Selective Activation for Continual Learning
DOI:
https://doi.org/10.1609/aaai.v38i1.27817
Keywords:
CMS: (Computational) Cognitive Architectures
Abstract
The next generation of machine intelligence requires the capability of continual learning: acquiring new knowledge without forgetting old knowledge while conserving limited computing resources. Spiking neural networks (SNNs), compared with artificial neural networks (ANNs), have more characteristics that align with biological neurons, which may serve as a potential gating mechanism for knowledge maintenance in neural networks. Inspired by the selective sparse activation principle of context gating in biological systems, we present a novel SNN model with selective activation to achieve continual learning. Trace-based K-Winner-Take-All (K-WTA) and variable-threshold components are designed to produce sparse selective activation in the spatial and temporal dimensions of spiking neurons, encouraging subpopulations of neurons to specialize in specific tasks. As a result, continual learning can be maintained by routing different tasks through different populations of neurons in the network. Experiments are conducted on the MNIST and CIFAR10 datasets under the class-incremental setting. The results show that the proposed SNN model achieves performance competitive with, and in some cases surpassing, regularization-based methods deployed on traditional ANNs.
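To make the mechanism described in the abstract concrete, the following is a minimal, hypothetical Python/NumPy sketch of trace-based K-WTA gating combined with a variable (adaptive) threshold on a leaky integrate-and-fire layer. All names and parameters (SelectiveLIFLayer, tau_trace, theta_plus, k_active) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class SelectiveLIFLayer:
    """Sketch of a spiking layer with trace-based K-WTA selective activation.

    This is an assumed illustration of the idea in the abstract, not the
    paper's actual model: only the k neurons with the highest activity
    traces may emit spikes, and a per-neuron variable threshold further
    sparsifies activity over time.
    """

    def __init__(self, n_neurons, k_active, tau_mem=0.9, tau_trace=0.95,
                 base_threshold=1.0, theta_plus=0.05):
        self.n = n_neurons
        self.k = k_active                  # winners kept per time step
        self.tau_mem = tau_mem             # membrane leak factor
        self.tau_trace = tau_trace         # decay of activity trace / threshold offset
        self.v = np.zeros(n_neurons)       # membrane potentials
        self.trace = np.zeros(n_neurons)   # spiking-activity traces
        self.theta = np.zeros(n_neurons)   # adaptive threshold offsets
        self.v_th = base_threshold
        self.theta_plus = theta_plus       # threshold increment after a spike

    def step(self, input_current):
        # 1) Leaky integration of the input current.
        self.v = self.tau_mem * self.v + input_current

        # 2) Variable threshold: recently active neurons need more drive,
        #    which spreads activation across different subpopulations.
        over_threshold = self.v >= (self.v_th + self.theta)

        # 3) Trace-based K-WTA: among neurons over threshold, keep only the
        #    k with the highest traces; all other candidates are gated off.
        candidates = np.where(over_threshold)[0]
        spikes = np.zeros(self.n, dtype=np.float32)
        if candidates.size > 0:
            k = min(self.k, candidates.size)
            winners = candidates[np.argsort(self.trace[candidates])[-k:]]
            spikes[winners] = 1.0

        # 4) Reset winners, then update traces and adaptive thresholds.
        self.v[spikes == 1.0] = 0.0
        self.trace = self.tau_trace * self.trace + spikes
        self.theta = self.tau_trace * self.theta + self.theta_plus * spikes
        return spikes
```

In this sketch the trace term keeps the same small subpopulation winning for inputs from one task, which is one plausible way to realize the task-routing behaviour the abstract describes; the actual gating and threshold dynamics in the paper may differ.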
Published
2024-03-25
How to Cite
Shen, J., Ni, W., Xu, Q., & Tang, H. (2024). Efficient Spiking Neural Networks with Sparse Selective Activation for Continual Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(1), 611-619. https://doi.org/10.1609/aaai.v38i1.27817
Issue
Vol. 38 No. 1 (2024)
Section
AAAI Technical Track on Cognitive Modeling & Cognitive Systems