Seeing the Unseen: Zooming in the Dark with Event Cameras

Authors

  • Dachun Kai, MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China
  • Zeyu Xiao, National University of Singapore
  • Huyue Zhu, MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China
  • Jiaxiao Wang, MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China
  • Yueyi Zhang, Miromind AI
  • Xiaoyan Sun, MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center

DOI:

https://doi.org/10.1609/aaai.v40i7.37478

Abstract

This paper addresses low-light video super-resolution (LVSR), which aims to restore high-resolution videos from low-light, low-resolution (LR) inputs. Existing LVSR methods often struggle to recover fine details due to limited contrast and insufficient high-frequency information. To overcome these challenges, we present RetinexEVSR, the first event-driven LVSR framework, which leverages high-contrast event signals and Retinex-inspired priors to enhance video quality in low-light scenarios. Unlike previous approaches that directly fuse degraded signals, RetinexEVSR introduces a novel bidirectional cross-modal fusion strategy to extract and integrate meaningful cues from noisy event data and degraded RGB frames. Specifically, an illumination-guided event enhancement module progressively refines event features using illumination maps derived from the Retinex model, suppressing low-light artifacts while preserving high-contrast details. Furthermore, we propose an event-guided reflectance enhancement module that uses the enhanced event features to dynamically recover reflectance details via a multi-scale fusion mechanism. Experimental results show that RetinexEVSR achieves state-of-the-art performance on three datasets. Notably, on the SDSD benchmark, our method achieves a gain of up to 2.95 dB while reducing runtime by 65% compared to prior event-based methods.
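The Retinex prior referenced in the abstract models an image as the element-wise product of a reflectance map and an illumination map, I = R ∘ L. The sketch below illustrates only this classical decomposition with a coarse illumination estimate (per-pixel channel maximum); the paper's illumination-guided and reflectance-enhancement modules are learned and not reproduced here, and the function name is hypothetical.

```python
import numpy as np

def retinex_decompose(img, eps=1e-6):
    """Classical Retinex-style decomposition I = R * L (a sketch, not the paper's module).

    img: float array of shape (H, W, 3) with values in [0, 1].
    Returns (reflectance, illumination), where illumination is a coarse
    per-pixel estimate taken as the maximum over color channels.
    """
    illumination = img.max(axis=2, keepdims=True)   # coarse L estimate, shape (H, W, 1)
    reflectance = img / (illumination + eps)        # R = I / L, broadcast over channels
    return reflectance, illumination

# Toy usage: a uniformly dim gray image yields low illumination
# and near-unit reflectance after normalization.
dim = np.full((4, 4, 3), 0.2)
R, L = retinex_decompose(dim)
```

Under this decomposition, enhancing a low-light frame amounts to brightening L while leaving R (the scene's intrinsic detail) intact, which is the intuition behind using illumination maps to guide feature refinement.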

Published

2026-03-14

How to Cite

Kai, D., Xiao, Z., Zhu, H., Wang, J., Zhang, Y., & Sun, X. (2026). Seeing the Unseen: Zooming in the Dark with Event Cameras. Proceedings of the AAAI Conference on Artificial Intelligence, 40(7), 5593–5601. https://doi.org/10.1609/aaai.v40i7.37478

Section

AAAI Technical Track on Computer Vision IV