Going Deeper With Directly-Trained Larger Spiking Neural Networks

Authors

  • Hanle Zheng, Center for Brain-Inspired Computing Research, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
  • Yujie Wu, Center for Brain-Inspired Computing Research, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
  • Lei Deng, Center for Brain-Inspired Computing Research, Department of Precision Instrument, Tsinghua University, Beijing 100084, China; Department of Electrical and Computer Engineering, University of California, Santa Barbara
  • Yifan Hu, Center for Brain-Inspired Computing Research, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
  • Guoqi Li, Center for Brain-Inspired Computing Research, Department of Precision Instrument, Tsinghua University, Beijing 100084, China; Beijing Innovation Center for Future Chip, Tsinghua University, Beijing 100084, China

DOI:

https://doi.org/10.1609/aaai.v35i12.17320

Keywords:

Bio-inspired Learning, (Deep) Neural Network Algorithms, (Deep) Neural Network Learning Theory

Abstract

Spiking neural networks (SNNs) are promising for bio-plausible coding of spatio-temporal information and event-driven signal processing, which makes them well suited for energy-efficient implementation on neuromorphic hardware. However, the unique working mode of SNNs makes them more difficult to train than traditional networks. Currently, there are two main routes to training deep SNNs with high performance. The first is to convert a pre-trained ANN model to its SNN version, which usually requires a long coding window for convergence and cannot exploit spatio-temporal features during training to solve temporal tasks. The other is to directly train SNNs in the spatio-temporal domain. However, due to the binary spike activity of the firing function and the problem of gradient vanishing or explosion, current methods are restricted to shallow architectures and therefore struggle to harness large-scale datasets (e.g., ImageNet). To this end, we propose a threshold-dependent batch normalization (tdBN) method based on the emerging spatio-temporal backpropagation, termed “STBP-tdBN”, enabling direct training of very deep SNNs and efficient implementation of their inference on neuromorphic hardware. With the proposed method and an elaborated shortcut connection, we significantly extend directly-trained SNNs from shallow structures (fewer than 10 layers) to very deep structures (50 layers). Furthermore, we theoretically analyze the effectiveness of our method based on the “Block Dynamical Isometry” theory. Finally, we report superior accuracy results, including 93.15% on CIFAR-10, 67.8% on DVS-CIFAR10, and 67.05% on ImageNet, with very few timesteps. To the best of our knowledge, this is the first work to explore directly-trained deep SNNs with high performance on ImageNet. We believe this work will pave the way toward fully exploiting the advantages of SNNs and attract more researchers to contribute to this field.
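To make the central idea of tdBN concrete, the sketch below is a minimal PyTorch illustration, not the authors' reference code. It assumes pre-activations of shape (T, N, C, H, W) collected over T timesteps, normalizes them jointly over the temporal, batch, and spatial dimensions, and rescales the result toward alpha * Vth, consistent with the threshold-dependent normalization the abstract describes. The class name, argument names, and default values are illustrative assumptions, and running statistics for inference are omitted for brevity.

import torch
import torch.nn as nn

class TdBatchNorm2d(nn.Module):
    # A minimal sketch of threshold-dependent batch normalization (tdBN).
    # Input shape: (T, N, C, H, W) = (timesteps, batch, channels, height, width).
    # Statistics are computed jointly over timesteps, batch, and space, and the
    # normalized pre-activations are rescaled toward alpha * v_th.
    # Illustrative only; running statistics for inference are omitted.
    def __init__(self, channels, v_th=1.0, alpha=1.0, eps=1e-5):
        super().__init__()
        self.v_th, self.alpha, self.eps = v_th, alpha, eps
        self.weight = nn.Parameter(torch.ones(channels))  # learnable scale
        self.bias = nn.Parameter(torch.zeros(channels))   # learnable shift

    def forward(self, x):
        # Normalize over every dimension except the channel dimension.
        dims = (0, 1, 3, 4)
        mean = x.mean(dim=dims, keepdim=True)
        var = x.var(dim=dims, keepdim=True, unbiased=False)
        x_hat = self.alpha * self.v_th * (x - mean) / torch.sqrt(var + self.eps)
        w = self.weight.view(1, 1, -1, 1, 1)
        b = self.bias.view(1, 1, -1, 1, 1)
        return w * x_hat + b

In use, such a layer would sit between a shared-weight convolution applied at every timestep and the spiking activation, so that membrane-potential inputs arrive pre-scaled relative to the firing threshold.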

Published

2021-05-18

How to Cite

Zheng, H., Wu, Y., Deng, L., Hu, Y., & Li, G. (2021). Going Deeper With Directly-Trained Larger Spiking Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 11062-11070. https://doi.org/10.1609/aaai.v35i12.17320

Issue

Vol. 35 No. 12 (2021)

Section

AAAI Technical Track on Machine Learning V