Spikingformer: A Key Foundation Model for Spiking Neural Networks

Authors

  • Chenlin Zhou, School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University; Pengcheng Laboratory
  • Liutao Yu, Pengcheng Laboratory
  • Zhaokun Zhou, School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University
  • Han Zhang, Harbin Institute of Technology
  • Jiaqi Wang, Harbin Institute of Technology; Pengcheng Laboratory
  • Huihui Zhou, Pengcheng Laboratory
  • Zhengyu Ma, Pengcheng Laboratory
  • Yonghong Tian, School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University; Pengcheng Laboratory; School of Computer Science, Peking University

DOI:

https://doi.org/10.1609/aaai.v40i3.37207

Abstract

Spiking neural networks (SNNs) offer a promising energy-efficient alternative to artificial neural networks, owing to their event-driven spiking computation. However, several foundational SNN backbones (including Spikformer and SEW ResNet) suffer from non-spike computations (integer-float multiplications) caused by the structure of their residual connections. These non-spike computations increase the power consumption of SNNs and make them unsuitable for deployment on mainstream neuromorphic hardware. In this paper, we analyze the spike-driven behavior of residual connection methods in SNNs. We then present Spikingformer, a novel spiking transformer backbone that merges the MS (membrane shortcut) residual connection with self-attention in a biologically plausible way, addressing the non-spike computation problem of Spikformer while retaining global modeling capability. We evaluate Spikingformer on 13 datasets spanning large-scale static images, neuromorphic data, and natural language tasks, demonstrating its effectiveness and universality and setting a vital benchmark for spiking neural networks. With its spike-driven computation and global modeling capability, Spikingformer is expected to become an efficient general-purpose SNN backbone for energy-efficient artificial intelligence.
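The non-spike computation issue can be made concrete with a small sketch. The following PyTorch-style code (hypothetical class names, a deliberately simplified single-step LIF neuron, and layer orderings reduced to their essentials; it is not the authors' implementation) contrasts a SEW-style residual, where two binary spike maps are summed and the resulting integer values feed the next convolution, with a spike-driven ordering in the spirit of Spikingformer, where the spiking neuron precedes the convolution so that every multiplication involves only binary spikes.

import torch
import torch.nn as nn


class LIFNeuron(nn.Module):
    # Toy leaky-integrate-and-fire neuron emitting binary spikes (0/1).
    # Stand-in for the multi-step neurons used in SNN libraries such as
    # SpikingJelly; the dynamics are deliberately simplified.
    def __init__(self, v_threshold: float = 1.0):
        super().__init__()
        self.v_threshold = v_threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Heaviside step: spike when the input crosses the threshold.
        return (x >= self.v_threshold).float()


class SEWStyleBlock(nn.Module):
    # Conv -> BN -> Neuron, then add the spike shortcut.
    # The sum of two binary spike maps can reach 2, so the next layer's
    # convolution receives integer inputs: an integer-float multiplication.
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.neuron = LIFNeuron()

    def forward(self, spikes: torch.Tensor) -> torch.Tensor:
        out = self.neuron(self.bn(self.conv(spikes)))
        return out + spikes  # values in {0, 1, 2}: non-spike computation downstream


class SpikeDrivenBlock(nn.Module):
    # Neuron -> Conv -> BN, then add the shortcut on the float/membrane path.
    # Every convolution only ever multiplies its weights with binary spikes,
    # so the block stays spike-driven (a sketch of the pre-activation ordering;
    # exact layer details follow the paper).
    def __init__(self, channels: int):
        super().__init__()
        self.neuron = LIFNeuron()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spikes = self.neuron(x)                 # binary input to the convolution
        return self.bn(self.conv(spikes)) + x   # addition stays on the float branch

In the second block the addition happens after the convolution on the floating-point branch, so no convolution ever receives multi-bit activations; this is the property that keeps the computation spike-driven and compatible with event-driven neuromorphic hardware.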

Published

2026-03-14

How to Cite

Zhou, C., Yu, L., Zhou, Z., Zhang, H., Wang, J., Zhou, H., Ma, Z., & Tian, Y. (2026). Spikingformer: A Key Foundation Model for Spiking Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 40(3), 2236-2244. https://doi.org/10.1609/aaai.v40i3.37207

Issue

Vol. 40 No. 3 (2026)

Section

AAAI Technical Track on Cognitive Modeling & Cognitive Systems