Gated Attention Coding for Training High-Performance and Efficient Spiking Neural Networks

Authors

  • Xuerui Qiu Institute of Automation, Chinese Academy of Sciences; School of Future Technology, University of Chinese Academy of Sciences; University of Electronic Science and Technology of China
  • Rui-Jie Zhu University of California, Santa Cruz
  • Yuhong Chou Xi’an Jiaotong University
  • Zhaorui Wang University of Electronic Science and Technology of China
  • Liang-Jian Deng University of Electronic Science and Technology of China
  • Guoqi Li Institute of Automation, Chinese Academy of Sciences; Peng Cheng Laboratory

DOI:

https://doi.org/10.1609/aaai.v38i1.27816

Keywords:

CMS: Neural Spike Coding, CMS: Agent Architectures, CMS: Applications, CMS: Other Foundations of Cognitive Modeling & Systems, ML: Bio-inspired Learning

Abstract

Spiking neural networks (SNNs) are emerging as an energy-efficient alternative to traditional artificial neural networks (ANNs) due to their spike-based, event-driven nature. Coding is crucial in SNNs because it converts external input stimuli into spatio-temporal feature sequences. However, most existing deep SNNs rely on direct coding, which produces weak spike representations and lacks the temporal dynamics inherent in human vision. Hence, we introduce Gated Attention Coding (GAC), a plug-and-play module that leverages a multi-dimensional gated attention unit to efficiently encode inputs into powerful representations before feeding them into the SNN architecture. GAC functions as a preprocessing layer that does not disrupt the spike-driven nature of the SNN, making it amenable to efficient neuromorphic hardware implementation with minimal modifications. Through a theoretical analysis based on an observer model, we demonstrate that GAC's attention mechanism improves temporal dynamics and coding efficiency. Experiments on the CIFAR10/100 and ImageNet datasets show that GAC achieves state-of-the-art accuracy with remarkable efficiency. Notably, we improve top-1 accuracy by 3.10% on CIFAR100 with only 6 time steps and by 1.07% on ImageNet, while reducing energy consumption to 66.9% of that of previous work. To the best of our knowledge, this is the first work to explore an attention-based dynamic coding scheme in deep SNNs, and it demonstrates exceptional effectiveness and efficiency on large-scale datasets. Code is available at https://github.com/bollossom/GAC.
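
As a rough illustration of the coding scheme described in the abstract, the PyTorch sketch below shows how a gated-attention encoder could sit in front of a spiking backbone: a static image is repeated over T time steps (direct coding) and then gated by learned channel and temporal attention maps. The class name GatedAttentionCoding, the 1x1-convolution channel gate, and the Conv1d temporal gate are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

import torch
import torch.nn as nn


class GatedAttentionCoding(nn.Module):
    # Hypothetical sketch of a gated-attention input encoder for SNNs.
    # The repeated input sequence is modulated by channel and temporal
    # attention maps before being passed to the spiking backbone.

    def __init__(self, channels: int, time_steps: int, reduction: int = 4):
        super().__init__()
        self.time_steps = time_steps
        hidden = max(channels // reduction, 1)
        # Channel attention from the spatially pooled input (assumed design).
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Temporal attention mixing information across the T repeated frames (assumed design).
        self.temporal_gate = nn.Sequential(
            nn.Conv1d(time_steps, time_steps, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [B, C, H, W] static image -> output: [B, T, C, H, W] gated sequence.
        b, c, _, _ = x.shape
        seq = x.unsqueeze(1).repeat(1, self.time_steps, 1, 1, 1)  # direct coding
        ch_att = self.channel_gate(x).view(b, 1, c, 1, 1)         # [B, 1, C, 1, 1]
        t_att = self.temporal_gate(seq.mean(dim=(3, 4)))          # [B, T, C]
        t_att = t_att.view(b, self.time_steps, c, 1, 1)           # [B, T, C, 1, 1]
        return seq * ch_att * t_att                               # element-wise gating


# Example: encode a CIFAR-sized batch into a 6-step input sequence.
encoder = GatedAttentionCoding(channels=3, time_steps=6)
out = encoder(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 6, 3, 32, 32])

The gated output remains a dense, analog-valued sequence; the spiking dynamics begin in the backbone that consumes it, which is consistent with GAC's role as a preprocessing layer.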

Published

2024-03-25

How to Cite

Qiu, X., Zhu, R.-J., Chou, Y., Wang, Z., Deng, L.-J., & Li, G. (2024). Gated Attention Coding for Training High-Performance and Efficient Spiking Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 38(1), 601-610. https://doi.org/10.1609/aaai.v38i1.27816

Issue

Vol. 38 No. 1 (2024)

Section

AAAI Technical Track on Cognitive Modeling & Cognitive Systems