Deblur4DGS: 4D Gaussian Splatting from Blurry Monocular Video
DOI: https://doi.org/10.1609/aaai.v40i13.38047
Abstract
Recent 4D reconstruction methods have yielded impressive results, but they rely on sharp videos for supervision. In practice, motion blur frequently arises from camera shake and object movement, and existing methods render blurry results when reconstructing 4D models from such videos. Although a few approaches have attempted to address this problem, they struggle to produce high-quality results because they inaccurately estimate the continuous dynamic representations within the exposure time. Encouraged by recent work on 3D motion trajectory modeling with 3D Gaussian Splatting (3DGS), we adopt 3DGS as the scene representation and propose Deblur4DGS to obtain a high-quality 4D model from blurry monocular video. Specifically, we transform the estimation of continuous dynamic representations within the exposure time into the estimation of the exposure time itself. Moreover, we introduce an exposure regularization term, along with multi-frame and multi-resolution consistency regularization terms, to avoid trivial solutions. Furthermore, to better represent objects with large motion, we suggest blur-aware variable canonical Gaussians. Beyond novel-view synthesis, Deblur4DGS can be applied to improve blurry video from multiple perspectives, including deblurring, frame interpolation, and video stabilization. Extensive experiments on both synthetic and real-world data across these four tasks show that Deblur4DGS outperforms state-of-the-art 4D reconstruction methods.
Published
2026-03-14
How to Cite
Wu, R., Zhang, Z., Chen, M., Yan, Z., & Zuo, W. (2026). Deblur4DGS: 4D Gaussian Splatting from Blurry Monocular Video. Proceedings of the AAAI Conference on Artificial Intelligence, 40(13), 10727-10735. https://doi.org/10.1609/aaai.v40i13.38047
Section
AAAI Technical Track on Computer Vision X
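The abstract describes recasting the estimation of continuous dynamic representations within the exposure window as estimating the exposure time itself, with a blurry frame modeled as the aggregate of sharp renderings over that window. The sketch below illustrates that blur-formation idea only; it is not the paper's implementation, and the function names (`simulate_blur`, `render_fn`), the uniform timestamp sampling, and the per-frame `exposure` parameter are illustrative assumptions.

```python
import numpy as np

def simulate_blur(render_fn, frame_time, exposure, num_samples=8):
    """Approximate a blurry frame as the mean of sharp renderings at
    timestamps sampled uniformly within an estimated exposure window.

    render_fn(t) -> H x W x 3 array: renders a sharp image at time t.
    frame_time: nominal capture time of the frame.
    exposure: estimated exposure duration (hypothetical learnable scalar).
    """
    ts = np.linspace(frame_time - exposure / 2.0,
                     frame_time + exposure / 2.0,
                     num_samples)
    # Averaging sharp renderings over the window mimics physical blur formation.
    return np.mean([render_fn(t) for t in ts], axis=0)

# Toy renderer: a scene whose red channel varies linearly with time.
def toy_render(t):
    img = np.zeros((4, 4, 3))
    img[..., 0] = t
    return img

blurry = simulate_blur(toy_render, frame_time=0.5, exposure=0.2)
```

In a reconstruction setting, such a synthesized blurry frame would be compared against the observed blurry input, so gradients flow both to the scene representation and to the exposure estimate.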