GFlow: Recovering 4D World from Monocular Video
DOI:
https://doi.org/10.1609/aaai.v39i8.32847

Abstract
Recovering the 4D world from monocular video is a crucial yet challenging task. Conventional methods usually rely on assumptions such as multi-view video, known camera parameters, or static scenes. In this paper, we relax all these constraints and tackle a highly ambitious but practical task: given only a single monocular video without camera parameters, we aim to recover the dynamic 3D world alongside the camera poses. To this end, we introduce GFlow, a new framework that uses only 2D priors (depth and optical flow) to lift a video into a 4D scene, represented as a flow of 3D Gaussians through space and time. GFlow first segments the video into still and moving parts, then alternates between optimizing the camera poses and the dynamics of the 3D Gaussian points. This scheme ensures consistency among adjacent points and smooth transitions between frames. Since dynamic scenes continually introduce new visual content, we also present prior-driven initialization and a pixel-wise densification strategy for Gaussian points to integrate that content. Together, these techniques let GFlow go beyond 4D recovery from casual videos: it naturally enables point tracking and segmentation of moving objects across frames. Additionally, GFlow estimates the camera pose for each frame, enabling novel view synthesis by changing the camera pose. This capability facilitates extensive scene-level and object-level editing, highlighting GFlow's versatility and effectiveness.

Published
2025-04-11
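The alternating optimization the abstract describes (fit the camera from the still part of the scene, then update the moving points with the camera fixed) has the structure of coordinate descent. The toy sketch below is hypothetical and not the authors' code: it reduces the "camera" to a 2D translation and the "Gaussian points" to 2D positions purely to illustrate that alternating structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Static background points and their noisy observations under an
# unknown camera shift (stand-in for the still part of the scene).
still = rng.normal(size=(50, 2))
true_cam = np.array([0.3, -0.2])
obs_still = still + true_cam + rng.normal(scale=0.01, size=still.shape)

# Step 1: with the still points fixed, fit the camera.
# For a pure translation the least-squares fit is the mean residual.
cam = (obs_still - still).mean(axis=0)

# Step 2: with the camera fixed, recover the moving points
# by undoing the estimated camera motion on their observations.
obs_moving = rng.normal(size=(10, 2))
moving = obs_moving - cam

print(np.allclose(cam, true_cam, atol=0.05))
```

In the full method each step is a non-trivial optimization over camera pose and Gaussian parameters rather than a closed-form mean, but the still/moving split plays the same role: the static part anchors the camera estimate, which in turn disambiguates the motion of the dynamic part.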
How to Cite
Wang, S., Yang, X., Shen, Q., Jiang, Z., & Wang, X. (2025). GFlow: Recovering 4D World from Monocular Video. Proceedings of the AAAI Conference on Artificial Intelligence, 39(8), 7862–7870. https://doi.org/10.1609/aaai.v39i8.32847
Issue
Section
AAAI Technical Track on Computer Vision VII