Enhancing Training of Spiking Neural Network with Stochastic Latency

Authors

  • Srinivas Anumasa, Mohamed bin Zayed University of Artificial Intelligence, UAE
  • Bhaskar Mukhoty, Mohamed bin Zayed University of Artificial Intelligence, UAE
  • Velibor Bojkovic, Mohamed bin Zayed University of Artificial Intelligence, UAE
  • Giulia De Masi, ARRC, Technology Innovation Institute, UAE; BioRobotics Institute, Sant’Anna School of Advanced Studies, Pisa, Italy
  • Huan Xiong, Mohamed bin Zayed University of Artificial Intelligence, UAE; Harbin Institute of Technology, China
  • Bin Gu, Mohamed bin Zayed University of Artificial Intelligence, UAE; School of Artificial Intelligence, Jilin University, China

DOI:

https://doi.org/10.1609/aaai.v38i10.28964

Keywords:

ML: Deep Learning Algorithms, ML: Bio-inspired Learning, ML: Deep Neural Architectures and Foundation Models

Abstract

Spiking neural networks (SNNs) have garnered significant attention for their low power consumption when deployed on neuromorphic hardware, which operates at orders of magnitude lower power than general-purpose hardware. Direct training methods for SNNs come with an inherent latency for which the SNNs are optimized, and in general, the higher the latency, the better the predictive power of the model, but at the same time, the higher the energy consumption during training and inference. Furthermore, an SNN model optimized for one particular latency does not necessarily perform well at lower latencies, which becomes relevant in scenarios where it is necessary to switch to a lower latency because of the depletion of onboard energy or other operational requirements. In this work, we propose Stochastic Latency Training (SLT), a direct training method for SNNs that optimizes the model for the given latency while incurring only a minimal reduction in predictive accuracy when shifted to lower inference latencies. We provide heuristics for our approach with partial theoretical justification and experimental evidence showing the state-of-the-art performance of our models on datasets such as CIFAR-10, DVS-CIFAR-10, CIFAR-100, and DVS-Gesture. Our code is available at https://github.com/srinuvaasu/SLT.
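The core idea described in the abstract, training an SNN so that it remains usable at latencies below the one it was optimized for, can be illustrated with a toy sketch. The sketch below is an assumption-laden illustration, not the authors' implementation: the weight matrix `W`, the LIF dynamics, and the per-batch uniform sampling of the latency `T` are all hypothetical stand-ins for the deep SNNs and surrogate-gradient training used in the paper.

```python
import random
import numpy as np

rng = np.random.default_rng(0)
T_MAX = 8  # nominal training latency (illustrative choice)

# Toy single-layer "SNN": a bank of leaky integrate-and-fire neurons.
# W is a hypothetical weight matrix standing in for a deep network.
W = rng.normal(size=(4, 10)) * 0.5

def run_snn(x, T, v_th=1.0, decay=0.5):
    """Unroll the toy SNN for T timesteps; return the mean spike rate."""
    v = np.zeros(10)              # membrane potentials
    rates = np.zeros(10)
    for _ in range(T):
        v = decay * v + x @ W     # leak, then integrate input current
        spikes = (v >= v_th).astype(float)
        v = v * (1.0 - spikes)    # hard reset where a spike fired
        rates += spikes
    return rates / T              # rate-coded output

# Stochastic latency: rather than always unrolling for T_MAX, sample the
# latency for each training batch, so the network is also optimized for
# shorter unrollings and degrades gracefully at lower inference latencies.
x = rng.normal(size=4)
for step in range(3):
    T = random.randint(1, T_MAX)  # sampled latency for this batch
    out = run_snn(x, T)
    # ... compute the loss on `out` and update W (the paper uses direct
    # training with surrogate gradients; omitted here) ...
```

At inference, `run_snn` can then be called with any `T <= T_MAX`, trading accuracy for energy, which is the deployment scenario the abstract motivates.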

Published

2024-03-24

How to Cite

Anumasa, S., Mukhoty, B., Bojkovic, V., De Masi, G., Xiong, H., & Gu, B. (2024). Enhancing Training of Spiking Neural Network with Stochastic Latency. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 10900–10908. https://doi.org/10.1609/aaai.v38i10.28964

Section

AAAI Technical Track on Machine Learning I