FatesGS: Fast and Accurate Sparse-View Surface Reconstruction Using Gaussian Splatting with Depth-Feature Consistency

Authors

  • Han Huang — Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing, China; School of Software, Tsinghua University, Beijing, China
  • Yulun Wu — Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing, China; School of Software, Tsinghua University, Beijing, China
  • Chao Deng — Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing, China; School of Software, Tsinghua University, Beijing, China
  • Ge Gao — Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing, China; School of Software, Tsinghua University, Beijing, China
  • Ming Gu — Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing, China; School of Software, Tsinghua University, Beijing, China
  • Yu-Shen Liu — School of Software, Tsinghua University, Beijing, China

DOI:

https://doi.org/10.1609/aaai.v39i4.32379

Abstract

Recently, Gaussian Splatting has sparked a new trend in the field of computer vision. Beyond novel view synthesis, it has also been extended to multi-view reconstruction. The latest methods deliver complete, detailed surface reconstruction while maintaining fast training speed. However, these methods still require dense input views, and their output quality degrades significantly with sparse views. We observe that the Gaussian primitives tend to overfit the few training views, leading to noisy floaters and incomplete reconstructed surfaces. In this paper, we present an innovative sparse-view reconstruction framework that leverages intra-view depth and multi-view feature consistency to achieve remarkably accurate surface reconstruction. Specifically, we utilize monocular depth ranking information to supervise the consistency of the depth distribution within patches and employ a smoothness loss to enhance the continuity of the distribution. To achieve finer surface reconstruction, we optimize the absolute position of depth through multi-view projection features. Extensive experiments on DTU and BlendedMVS demonstrate that our method outperforms state-of-the-art methods with a speedup of 60x to 200x, achieving swift and fine-grained mesh reconstruction without the need for costly pre-training.
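The abstract's patch-wise depth-ranking supervision can be illustrated with a small sketch. This is not the paper's exact formulation; the pairwise hinge loss below, the pair count, and the margin value are illustrative assumptions. It penalizes pixel pairs whose rendered-depth ordering contradicts the ordering given by a monocular depth prior:

```python
import numpy as np

def depth_ranking_loss(rendered, mono, n_pairs=256, margin=1e-4, seed=0):
    """Illustrative ranking loss: for randomly sampled pixel pairs within a
    patch, a hinge term is nonzero whenever the rendered depth ordering
    disagrees with the monocular depth ordering (scale-free supervision)."""
    rng = np.random.default_rng(seed)
    r, m = rendered.ravel(), mono.ravel()
    i = rng.integers(0, r.size, n_pairs)
    j = rng.integers(0, r.size, n_pairs)
    sign = np.sign(m[i] - m[j])      # monocular ordering for each pair
    keep = sign != 0                 # skip pairs the prior cannot rank
    # hinge: zero when the rendered depths agree with the monocular ranking
    return np.maximum(0.0, margin - sign[keep] * (r[i][keep] - r[j][keep])).mean()
```

Because only the ordering is compared, the loss is invariant to the unknown scale and shift of monocular depth, which is why ranking (rather than absolute-value) supervision is usable here; the absolute depth is then refined separately via multi-view projection features.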

Published

2025-04-11

How to Cite

Huang, H., Wu, Y., Deng, C., Gao, G., Gu, M., & Liu, Y.-S. (2025). FatesGS: Fast and Accurate Sparse-View Surface Reconstruction Using Gaussian Splatting with Depth-Feature Consistency. Proceedings of the AAAI Conference on Artificial Intelligence, 39(4), 3644–3652. https://doi.org/10.1609/aaai.v39i4.32379

Section

AAAI Technical Track on Computer Vision III