Low-Latency Space-Time Supersampling for Real-Time Rendering

Authors

  • Ruian He School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University
  • Shili Zhou School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University
  • Yuqi Sun School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University
  • Ri Cheng School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University
  • Weimin Tan School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University
  • Bo Yan School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University

DOI:

https://doi.org/10.1609/aaai.v38i3.27982

Keywords:

CV: Computational Photography, Image & Video Synthesis, CV: Low Level & Physics-based Vision

Abstract

With the rise of real-time rendering and the evolution of display devices, there is a growing demand for post-processing methods that can deliver high-resolution content at a high frame rate. Existing techniques often suffer from quality and latency issues because frame supersampling and extrapolation are treated disjointly. In this paper, we recognize the shared context and mechanisms between frame supersampling and extrapolation and present a novel framework, Space-time Supersampling (STSS). By integrating the two tasks into a unified framework, STSS improves overall quality at lower latency. To implement an efficient architecture, we treat aliasing and warping holes uniformly as reshading regions and put forth two key components to compensate for these regions, namely Random Reshading Masking (RRM) and the Efficient Reshading Module (ERM). Extensive experiments demonstrate that our approach achieves superior visual fidelity compared to state-of-the-art (SOTA) methods. Notably, this performance is achieved within only 4 ms, saving up to 75% of the time required by the conventional two-stage pipeline, which needs 17 ms.
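For intuition only, below is a minimal PyTorch-style sketch of the core idea outlined in the abstract: the previous high-resolution frame is warped to the target time with rendering motion vectors, disocclusion holes and aliased pixels are merged into a single reshading mask, and one lightweight network fills only those regions. All names (warp_with_motion_vectors, ReshadingNet, space_time_supersample) and the toy network are illustrative placeholders under these assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of unified space-time supersampling (not the authors' code).
# Assumes PyTorch; all module and function names are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp_with_motion_vectors(frame, mv):
    """Backward-warp `frame` (N,C,H,W) with screen-space motion vectors `mv`
    (N,2,H,W), given as offsets in normalized [-1, 1] coordinates."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=frame.device),
        torch.linspace(-1, 1, w, device=frame.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
    grid = base + mv.permute(0, 2, 3, 1)
    return F.grid_sample(frame, grid, align_corners=True)


class ReshadingNet(nn.Module):
    """Toy stand-in for an efficient reshading module: predicts content
    only for masked (reshading) regions from the warped frame and a
    low-resolution guide upsampled to the target resolution."""

    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3 + 3 + 1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, warped, lr_guide, mask):
        return self.body(torch.cat((warped, lr_guide, mask), dim=1))


def space_time_supersample(prev_hr, mv, lr_guide, hole_mask, alias_mask, net):
    """Produce one extrapolated high-resolution frame: warp the previous HR
    frame, merge disocclusion holes and aliased pixels into a single
    reshading mask, and reshade only those regions."""
    warped = warp_with_motion_vectors(prev_hr, mv)
    reshade_mask = torch.clamp(hole_mask + alias_mask, 0, 1)  # unify both defect types
    reshaded = net(warped, lr_guide, reshade_mask)
    return warped * (1 - reshade_mask) + reshaded * reshade_mask
```

The paper's RRM component is, judging by its name, a training-time masking strategy and is therefore not depicted in this inference-only sketch.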

Published

2024-03-24

How to Cite

He, R., Zhou, S., Sun, Y., Cheng, R., Tan, W., & Yan, B. (2024). Low-Latency Space-Time Supersampling for Real-Time Rendering. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 2103-2111. https://doi.org/10.1609/aaai.v38i3.27982

Issue

Vol. 38 No. 3 (2024)

Section

AAAI Technical Track on Computer Vision II