EvSTVSR: Event Guided Space-Time Video Super-Resolution

Authors

  • Haojie Yan, Zhejiang University
  • Zhan Lu, Nanyang Technological University
  • Zehao Chen, Zhejiang University
  • De Ma, Zhejiang University
  • Huajin Tang, Zhejiang University
  • Qian Zheng, Zhejiang University
  • Gang Pan, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v39i9.32983

Abstract

Space-time video super-resolution typically struggles with complex motions (including large and nonlinear motions) and scenes with varying illumination, owing to the lack of inter-frame information. Leveraging the dense temporal information provided by event signals offers a promising solution. Traditional event-based methods typically rely on multiple images and use motion estimation and compensation, which can introduce errors; errors accumulated across multiple frames often produce artifacts and blur in the output. To mitigate these issues, we propose EvSTVSR, a method that uses fewer adjacent frames and integrates dense temporal information from events to guide alignment. Additionally, we introduce a coordinate-based feature fusion upsampling module to achieve spatial super-resolution. Experimental results demonstrate that our method not only outperforms existing RGB-based approaches but also excels in handling large-motion scenarios.
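The abstract does not detail the coordinate-based upsampling module, but the general idea behind coordinate-based (implicit-neural, LIIF-style) upsampling can be sketched: a decoder is queried at arbitrary continuous coordinates, taking the nearest low-resolution feature plus the relative offset to that feature's cell centre, so any output resolution can be rendered from one feature map. The toy MLP, shapes, and nearest-cell lookup below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: C feature channels on an H x W low-resolution grid.
C, H, W = 8, 4, 4
feat = rng.standard_normal((C, H, W))

# A toy one-hidden-layer MLP mapping [feature; relative coord] -> RGB.
W1 = rng.standard_normal((16, C + 2)) * 0.1
b1 = np.zeros(16)
W2 = rng.standard_normal((3, 16)) * 0.1
b2 = np.zeros(3)

def query_rgb(x, y):
    """Predict an RGB value at continuous coordinate (x, y) in [0, 1]^2."""
    # Index of the nearest low-resolution cell.
    i = min(int(y * H), H - 1)
    j = min(int(x * W), W - 1)
    # Offset of the query point from that cell's centre.
    cy, cx = (i + 0.5) / H, (j + 0.5) / W
    rel = np.array([x - cx, y - cy])
    z = np.concatenate([feat[:, i, j], rel])
    h = np.maximum(W1 @ z + b1, 0.0)  # ReLU hidden layer
    return W2 @ h + b2

# Render at an arbitrary target resolution by querying a dense grid.
sH, sW = 8, 8
out = np.stack([[query_rgb((jj + 0.5) / sW, (ii + 0.5) / sH)
                 for jj in range(sW)] for ii in range(sH)])
# out has shape (sH, sW, 3): a 2x-upsampled image from a 4x4 feature map.
```

Because the decoder is conditioned on continuous coordinates rather than a fixed upsampling factor, the same module can, in principle, produce outputs at any spatial scale.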

Published

2025-04-11

How to Cite

Yan, H., Lu, Z., Chen, Z., Ma, D., Tang, H., Zheng, Q., & Pan, G. (2025). EvSTVSR: Event Guided Space-Time Video Super-Resolution. Proceedings of the AAAI Conference on Artificial Intelligence, 39(9), 9085-9093. https://doi.org/10.1609/aaai.v39i9.32983

Section

AAAI Technical Track on Computer Vision VIII