I2E: Real-Time Image-to-Event Conversion for High-Performance Spiking Neural Networks
DOI: https://doi.org/10.1609/aaai.v40i3.37179
Abstract
Spiking neural networks (SNNs) promise highly energy-efficient computing, but their adoption is hindered by a critical scarcity of event-stream data. This work introduces I2E, an algorithmic framework that resolves this bottleneck by converting static images into high-fidelity event streams. By simulating microsaccadic eye movements with a highly parallelized convolution, I2E achieves a conversion speed over 300x faster than prior methods, uniquely enabling on-the-fly data augmentation for SNN training. The framework's effectiveness is demonstrated on large-scale benchmarks. An SNN trained on the generated I2E-ImageNet dataset achieves a state-of-the-art accuracy of 60.50%. Critically, this work establishes a powerful sim-to-real paradigm where pre-training on synthetic I2E data and fine-tuning on the real-world CIFAR10-DVS dataset yields an unprecedented accuracy of 92.5%. This result validates that synthetic event data can serve as a high-fidelity proxy for real sensor data, bridging a long-standing gap in neuromorphic engineering. By providing a scalable solution to the data problem, I2E offers a foundational toolkit for developing high-performance neuromorphic systems. The open-source algorithm and all generated datasets are provided to accelerate research in the field.
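The abstract describes converting a static image into an event stream by simulating small microsaccade-like displacements. The sketch below illustrates the general idea only: it is not the paper's I2E algorithm. All names (`image_to_events`), the shift schedule, and the threshold value are assumptions for illustration. Real event cameras emit an event when the log intensity at a pixel changes by more than a contrast threshold, so the sketch shifts the image slightly, compares log intensities, and emits ON/OFF events where the change exceeds the threshold.

```python
import numpy as np

def image_to_events(img, shifts=((1, 0), (0, 1), (-1, 0), (0, -1)),
                    threshold=0.05):
    """Illustrative (hypothetical) image-to-event conversion.

    Simulates microsaccades as small circular shifts of the image and
    emits an event (t, x, y, polarity) wherever the log-intensity
    change between consecutive fixations exceeds `threshold`.
    """
    log_img = np.log1p(img.astype(np.float64))  # log intensity, as in DVS models
    events = []
    prev = log_img  # intensity seen at the previous fixation
    for t, (dy, dx) in enumerate(shifts):
        # One simulated microsaccade: shift the whole image by (dy, dx).
        shifted = np.roll(np.roll(log_img, dy, axis=0), dx, axis=1)
        diff = shifted - prev
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1  # ON / OFF event
            events.append((t, int(x), int(y), polarity))
        prev = shifted
    return events

# Usage: a horizontal intensity gradient produces events where the
# shift moves edges across pixels (mostly on the horizontal shifts).
img = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
evts = image_to_events(img)
```

A per-pixel loop like this is slow in Python; the 300x speedup reported in the abstract comes from expressing the shift-and-compare operation as a single parallelized convolution, which this sketch does not attempt to reproduce.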
Published
2026-03-14
How to Cite
Ma, R., Meng, L., Qiao, G., Ning, N., Liu, Y., & Hu, S. (2026). I2E: Real-Time Image-to-Event Conversion for High-Performance Spiking Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 40(3), 1982-1990. https://doi.org/10.1609/aaai.v40i3.37179
Section
AAAI Technical Track on Cognitive Modeling & Cognitive Systems