SNN-Driven Event-Based Flow and Rotation Estimation with SO(3) Refinement

Authors

  • Ruimin Sun, Zhejiang University
  • Haoran Xu, Zhejiang University
  • De Ma, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v40i11.37877

Abstract

Spiking Neural Networks (SNNs) offer a promising direction for energy-efficient event-based vision by leveraging sparse, temporally precise spikes. We propose a directly trained, fully spiking model for optical flow estimation, featuring a novel Spike GRU and membrane potential carryover for improved temporal modeling. On the DSEC-Flow benchmark, our model achieves competitive accuracy while reducing energy consumption by 42.88× over EV-FlowNet and 38× over TIDNet. Building on the predicted motion field, we infer camera rotation and, to the best of our knowledge, are the first to construct panoramic event images from SNN-based flow. We further introduce an optional unsupervised SO(3) refinement step that improves rotation accuracy by maximizing panorama consistency, without IMU or pose supervision. Our panoramas achieve visual quality comparable to CMax-SLAM's, showing that SNNs can enable fast, high-level spatial perception from event input alone.
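The unsupervised SO(3) refinement described above maximizes the consistency (sharpness) of the event panorama over candidate rotations. A minimal contrast-maximization sketch in that spirit is shown below; it simplifies to a single in-plane rotation rate and a grid search, and all function names are illustrative, not the paper's implementation:

```python
import numpy as np

def warp_events(xs, ys, ts, omega, center):
    # Rotate each event about the image center by -omega * t, undoing a
    # constant in-plane rotation: the correct omega re-aligns the events.
    cx, cy = center
    ang = -omega * ts
    c, s = np.cos(ang), np.sin(ang)
    dx, dy = xs - cx, ys - cy
    return cx + c * dx - s * dy, cy + s * dx + c * dy

def contrast(xs, ys, size):
    # Accumulate warped events into an image and score its sharpness.
    # Variance of the event image is the standard contrast objective.
    img, _, _ = np.histogram2d(xs, ys, bins=size, range=[[0, size], [0, size]])
    return img.var()

def refine_rotation(xs, ys, ts, size, candidates):
    # Grid search over candidate angular velocities; the variance of the
    # motion-compensated event image peaks at the true rotation rate.
    center = (size / 2, size / 2)
    scores = [contrast(*warp_events(xs, ys, ts, w, center), size)
              for w in candidates]
    return candidates[int(np.argmax(scores))]
```

In practice a full SO(3) parameterization and a gradient-based optimizer would replace the 1-D grid search, but the objective is the same: the rotation that best compensates the event motion produces the sharpest panorama.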

Published

2026-03-14

How to Cite

Sun, R., Xu, H., & Ma, D. (2026). SNN-Driven Event-Based Flow and Rotation Estimation with SO(3) Refinement. Proceedings of the AAAI Conference on Artificial Intelligence, 40(11), 9198-9205. https://doi.org/10.1609/aaai.v40i11.37877

Section

AAAI Technical Track on Computer Vision VIII