Learning to Super-resolve Dynamic Scenes for Neuromorphic Spike Camera
DOI: https://doi.org/10.1609/aaai.v37i3.25468
Keywords: CV: Computational Photography, Image & Video Synthesis; CV: Low Level & Physics-Based Vision
Abstract
The spike camera is a neuromorphic sensor that uses a novel "integrate-and-fire" mechanism to generate a continuous spike stream, recording dynamic light intensity at extremely high temporal resolution. However, as a trade-off for this high temporal resolution, its spatial resolution is limited, resulting in inferior reconstruction details. To address this issue, this paper develops a network (SpikeSR-Net) to super-resolve a high-resolution image sequence from low-resolution binary spike streams. SpikeSR-Net is designed based on the observation model of the spike camera and combines the merits of model-based and learning-based methods. To deal with the limited representation capacity of binary data, a pixel-adaptive spike encoder is proposed to convert spikes into a latent representation from which clues on intensity and motion can be inferred. A motion-aligned super resolver is then employed to exploit long-term correlation, so that the dense temporal sampling can be leveraged to enhance spatial resolution without introducing motion blur. Experimental results show that SpikeSR-Net is promising in super-resolving higher-quality images for the spike camera.
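The sketch below illustrates the integrate-and-fire observation model the abstract refers to, under common assumptions about spike cameras: each pixel accumulates incoming light intensity over time and emits a binary spike whenever its accumulator reaches a threshold, which is then subtracted (soft reset). The threshold value, function name, and test data here are illustrative, not the paper's actual implementation.

```python
import numpy as np

def simulate_spike_stream(intensity_frames, threshold=255.0):
    """Simulate a binary spike stream from a sequence of light-intensity frames
    using a per-pixel integrate-and-fire model (assumed soft-reset variant)."""
    T, H, W = intensity_frames.shape
    accumulator = np.zeros((H, W), dtype=np.float64)
    spikes = np.zeros((T, H, W), dtype=np.uint8)
    for t in range(T):
        accumulator += intensity_frames[t]      # integrate incoming light
        fired = accumulator >= threshold        # pixels whose accumulator crosses the threshold
        spikes[t][fired] = 1                    # emit a binary spike at this time step
        accumulator[fired] -= threshold         # subtract the threshold (soft reset)
    return spikes

# Example: a bright vertical bar sweeping across a 32x32 scene for 64 time steps.
frames = np.zeros((64, 32, 32))
for t in range(64):
    frames[t, :, (t // 2) % 32] = 200.0
spike_stream = simulate_spike_stream(frames)
print(spike_stream.shape, spike_stream.mean())  # stream shape and average firing rate
```

Brighter pixels reach the threshold more often and thus fire more densely in time, which is why the binary stream, despite its low per-sample precision, carries the intensity and motion cues that the pixel-adaptive spike encoder and motion-aligned super resolver are designed to recover.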
Published
2023-06-26
How to Cite
Zhao, J., Xiong, R., Zhang, J., Zhao, R., Liu, H., & Huang, T. (2023). Learning to Super-resolve Dynamic Scenes for Neuromorphic Spike Camera. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 3579-3587. https://doi.org/10.1609/aaai.v37i3.25468
Issue
Section
AAAI Technical Track on Computer Vision III