Neuromorphic Event Signal-Driven Network for Video De-raining
DOI: https://doi.org/10.1609/aaai.v38i3.27957
Keywords: CV: Low Level & Physics-based Vision
Abstract
Convolutional neural network-based video de-raining methods commonly rely on dense intensity frames captured by CMOS sensors. However, the limited temporal resolution of these sensors hinders the capture of dynamic rainfall information, limiting further improvement in de-raining performance. This study aims to overcome this issue by incorporating the neuromorphic event signal into video de-raining to enhance dynamic information perception. Specifically, we first utilize the dynamic information from the event signal as prior knowledge and integrate it into existing de-raining objectives to better constrain the solution space. We then design an optimization algorithm to solve the objective and construct a de-raining network, with CNNs as the backbone architecture, using a modular strategy that mimics the optimization process. To further explore the temporal correlation of the event signal, we incorporate a spiking self-attention module into our network. By leveraging the low latency and high temporal resolution of the event signal, along with the spatial and temporal representation capabilities of convolutional and spiking neural networks, our model captures more accurate dynamic information and significantly improves de-raining performance. For example, our network achieves a 1.24 dB improvement on the SynHeavy25 dataset compared to the previous state-of-the-art method, while utilizing only 39% of the parameters.
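The abstract mentions a spiking self-attention module for exploiting the temporal correlation of event features. The paper itself does not publish code here, so the sketch below is only a minimal illustration of what such a block might look like, assuming a Spikformer-style design in which Q, K, and V are binarized into spikes before the attention product; the names SpikingSelfAttention and heaviside_spike, the token shapes, and the scale factor are all hypothetical, not the authors' implementation.

import torch
import torch.nn as nn

def heaviside_spike(x: torch.Tensor, threshold: float = 1.0) -> torch.Tensor:
    # Binarize membrane potentials into {0, 1} spikes (forward pass only;
    # training an SNN would additionally need a surrogate gradient).
    return (x >= threshold).float()

class SpikingSelfAttention(nn.Module):
    """Illustrative spiking self-attention over event-feature tokens."""

    def __init__(self, dim: int, scale: float = 0.125):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.out_proj = nn.Linear(dim, dim, bias=False)
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) features, e.g. from stacked event frames.
        q = heaviside_spike(self.q_proj(x))
        k = heaviside_spike(self.k_proj(x))
        v = heaviside_spike(self.v_proj(x))
        # Binary Q/K keep the attention map sparse and non-negative,
        # so no softmax normalization is applied (Spikformer-style).
        attn = (q @ k.transpose(-2, -1)) * self.scale
        return self.out_proj(attn @ v)

if __name__ == "__main__":
    tokens = torch.randn(2, 64, 128)  # 2 clips, 64 spatial tokens, 128 channels
    out = SpikingSelfAttention(dim=128)(tokens)
    print(out.shape)  # torch.Size([2, 64, 128])

In this style of module, the spike binarization is what lets the attention product operate on sparse binary activations, which is the usual efficiency argument for pairing SNN components with a CNN backbone.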
Published
2024-03-24
How to Cite
Ge, C., Fu, X., He, P., Wang, K., Cao, C., & Zha, Z.-J. (2024). Neuromorphic Event Signal-Driven Network for Video De-raining. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 1878-1886. https://doi.org/10.1609/aaai.v38i3.27957
Issue
Vol. 38 No. 3 (2024)
Section
AAAI Technical Track on Computer Vision II