E2HQV: High-Quality Video Generation from Event Camera via Theory-Inspired Model-Aided Deep Learning

Authors

  • Qiang Qu, The University of Sydney
  • Yiran Shen, Shandong University
  • Xiaoming Chen, Beijing Technology and Business University
  • Yuk Ying Chung, The University of Sydney
  • Tongliang Liu, The University of Sydney

DOI:

https://doi.org/10.1609/aaai.v38i5.28263

Keywords:

CV: Applications, CV: Computational Photography, Image & Video Synthesis

Abstract

Bio-inspired event cameras, or dynamic vision sensors, asynchronously capture per-pixel brightness changes (event streams) with high temporal resolution and high dynamic range. However, the unstructured spatio-temporal event streams make it challenging to provide intuitive visualization with rich semantic information for human vision. This calls for events-to-video (E2V) solutions, which take event streams as input and generate high-quality video frames for intuitive visualization. Current solutions, however, are predominantly data-driven and do not consider prior knowledge of the underlying statistics relating event streams and video frames. They rely heavily on the non-linearity and generalization capability of deep neural networks and therefore struggle to reconstruct detailed textures when scenes are complex. In this work, we propose E2HQV, a novel E2V paradigm designed to produce high-quality video frames from events. The approach leverages a model-aided deep learning framework, underpinned by a theory-inspired E2V model derived from the fundamental imaging principles of event cameras. To deal with the issue of state reset in the recurrent components of E2HQV, we also design a temporal shift embedding module to further improve the quality of the video frames. Comprehensive evaluations on real-world event camera datasets validate our approach, with E2HQV notably outperforming state-of-the-art approaches, e.g., surpassing the second best by over 40% on some evaluation metrics.
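The abstract refers to the fundamental imaging principles of event cameras. As background, the standard event-generation model (not the paper's E2HQV model, whose details are in the full text) states that a pixel emits an event of polarity p ∈ {−1, +1} whenever its log intensity changes by a contrast threshold C, so summing polarities recovers a coarse log-intensity estimate. A minimal sketch, with a hypothetical threshold value and toy event list chosen for illustration:

```python
import numpy as np

# Contrast threshold: log-intensity change per event (illustrative value).
C = 0.2

def accumulate_events(events, shape, base_log_intensity=0.0):
    """Integrate polarity events into a per-pixel intensity estimate.

    events: iterable of (x, y, t, p) tuples with polarity p in {-1, +1}.
    Each event adds one threshold step p * C to that pixel's log intensity,
    following the standard event-camera generation model.
    """
    log_frame = np.full(shape, base_log_intensity, dtype=np.float64)
    for x, y, t, p in sorted(events, key=lambda e: e[2]):  # process in time order
        log_frame[y, x] += p * C
    return np.exp(log_frame)  # back from log to linear intensity

# Toy example: two positive events at pixel (0, 0), one negative at (1, 0).
events = [(0, 0, 0.01, +1), (0, 0, 0.02, +1), (1, 0, 0.015, -1)]
frame = accumulate_events(events, shape=(2, 2))
```

Direct integration like this is exactly what makes pure accumulation brittle (threshold noise and drift compound over time), which is the gap that learned E2V approaches such as E2HQV address.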

Published

2024-03-24

How to Cite

Qu, Q., Shen, Y., Chen, X., Chung, Y. Y., & Liu, T. (2024). E2HQV: High-Quality Video Generation from Event Camera via Theory-Inspired Model-Aided Deep Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(5), 4632-4640. https://doi.org/10.1609/aaai.v38i5.28263

Section

AAAI Technical Track on Computer Vision IV