Efficient 3D Recognition with Event-driven Spike Sparse Convolution

Authors

  • Xuerui Qiu, Institute of Automation, Chinese Academy of Sciences; School of Future Technology, University of Chinese Academy of Sciences
  • Man Yao, Institute of Automation, Chinese Academy of Sciences
  • Jieyuan Zhang, University of Electronic Science and Technology of China
  • Yuhong Chou, Institute of Automation, Chinese Academy of Sciences; The Hong Kong Polytechnic University
  • Ning Qiao, SynSense AG Corporation
  • Shibo Zhou, Huinao Zhixin
  • Bo Xu, Institute of Automation, Chinese Academy of Sciences
  • Guoqi Li, Institute of Automation, Chinese Academy of Sciences; Peng Cheng Laboratory; Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v39i19.34212

Abstract

Spiking Neural Networks (SNNs) provide an energy-efficient way to extract 3D spatio-temporal features. Point clouds are sparse 3D spatial data, which suggests that SNNs should be well-suited to processing them. However, when SNNs are applied to point clouds, their performance is often limited and their application scenarios are narrow. We attribute this to inappropriate preprocessing and feature extraction methods. To address this issue, we first introduce the Spike Voxel Coding (SVC) scheme, which encodes 3D point clouds into a sparse spike train space, reducing storage requirements and saving time on point cloud preprocessing. Then, we propose a Spike Sparse Convolution (SSC) model for efficiently extracting 3D sparse point cloud features. Combining SVC and SSC, we design an efficient 3D SNN backbone (E-3DSNN) that is friendly to neuromorphic hardware. For instance, SSC can be implemented on neuromorphic chips with only minor modifications to the addressing function of vanilla spike convolution. Experiments on the ModelNet40, KITTI, and Semantic KITTI datasets demonstrate that E-3DSNN achieves state-of-the-art (SOTA) results with remarkable efficiency. Notably, our E-3DSNN (1.87M parameters) obtained 91.7% top-1 accuracy on ModelNet40, surpassing the current best SNN baseline (14.3M parameters) by 3.0%. To the best of our knowledge, it is the first directly trained 3D SNN backbone that can simultaneously handle various 3D computer vision tasks (e.g., classification, detection, and segmentation) in an event-driven manner.
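The core efficiency argument of the abstract — that convolving binary spike trains over sparse voxels is event-driven and multiplication-free — can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the scatter-accumulate loop, and the unit firing threshold are all illustrative assumptions; it only shows why, with binary inputs, sparse convolution reduces to accumulating weights at the few active voxel sites.

```python
import numpy as np

def spike_sparse_conv3d(spikes, weights, shape, threshold=1.0):
    """Illustrative event-driven 3x3x3 convolution over binary spike voxels.

    spikes:  dict mapping active voxel coordinates (x, y, z) -> 1
    weights: (3, 3, 3) kernel
    shape:   bounds of the voxel grid

    Because inputs are binary spikes, each active voxel simply scatters
    (adds) kernel weights to neighboring sites -- no multiplications,
    and no work is done for silent voxels.
    """
    potential = {}
    for (x, y, z) in spikes:  # iterate only over active (spiking) sites
        for dx in range(-1, 2):
            for dy in range(-1, 2):
                for dz in range(-1, 2):
                    ox, oy, oz = x + dx, y + dy, z + dz
                    if 0 <= ox < shape[0] and 0 <= oy < shape[1] and 0 <= oz < shape[2]:
                        potential[(ox, oy, oz)] = (
                            potential.get((ox, oy, oz), 0.0) + weights[dx + 1, dy + 1, dz + 1]
                        )
    # Emit binary output spikes where accumulated potential crosses threshold.
    return {site: 1 for site, v in potential.items() if v >= threshold}

# A single input spike in a 3x3x3 grid with an all-ones kernel
# activates every in-bounds neighbor.
out = spike_sparse_conv3d({(1, 1, 1): 1}, np.ones((3, 3, 3)), (3, 3, 3))
```

The sparsity pattern of the output (here, a dict keyed by active coordinates) is what a neuromorphic addressing function would consume, which is consistent with the abstract's claim that SSC needs only minor changes to vanilla spike convolution's addressing.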

Published

2025-04-11

How to Cite

Qiu, X., Yao, M., Zhang, J., Chou, Y., Qiao, N., Zhou, S., … Li, G. (2025). Efficient 3D Recognition with Event-driven Spike Sparse Convolution. Proceedings of the AAAI Conference on Artificial Intelligence, 39(19), 20086–20094. https://doi.org/10.1609/aaai.v39i19.34212

Section

AAAI Technical Track on Machine Learning V