SC-NeuS: Consistent Neural Surface Reconstruction from Sparse and Noisy Views
DOI:
https://doi.org/10.1609/aaai.v38i3.28010
Keywords:
CV: 3D Computer Vision, CV: Learning & Optimization for CV
Abstract
Recent neural surface reconstruction approaches based on volume rendering have made substantial progress and achieve impressive reconstruction quality, but they still rely on dense views with highly accurate camera poses. To overcome this limitation, this paper focuses on consistent surface reconstruction from sparse views with noisy camera poses. Unlike previous approaches, the key idea of this paper is to exploit multi-view constraints directly from the explicit geometry of the neural surface, which serve as effective regularization to jointly learn the neural surface and refine the camera poses. To build effective multi-view constraints, we introduce a fast differentiable on-surface intersection to generate on-surface points, and propose view-consistent losses on these differentiable points to regularize neural surface learning. Building on these components, we propose a joint learning strategy, named SC-NeuS, that performs geometry-consistent surface reconstruction in an end-to-end manner. With extensive evaluation on public datasets, SC-NeuS consistently achieves better surface reconstruction results with finer-grained details than previous approaches, especially from sparse and noisy camera views. The source code is available at https://github.com/zouzx/sc-neus.git.
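As an illustration only (not the authors' released code), the PyTorch sketch below shows the two ingredients the abstract alludes to: a differentiable ray/SDF intersection that yields on-surface points, and a simple view-consistency loss that compares projections of those points across two views. All names (sdf_net, intersect_sdf, view_consistency_loss, K, pose_a, pose_b) are hypothetical placeholders, not identifiers from the paper or repository.

import torch
import torch.nn.functional as F

def intersect_sdf(sdf_net, origins, dirs, n_steps=32):
    # Sphere-trace each ray against the SDF without tracking gradients,
    # then apply one correction step so the resulting surface point stays
    # differentiable w.r.t. the network weights and the ray parameters.
    t = torch.zeros(origins.shape[0], 1, device=origins.device)
    with torch.no_grad():
        for _ in range(n_steps):
            t = t + sdf_net(origins + t * dirs)
    x = origins + t * dirs            # approximate intersection (t detached)
    x = x - sdf_net(x) * dirs         # differentiable correction step
    return x

def view_consistency_loss(points, K, pose_a, pose_b, img_a, img_b):
    # Project shared on-surface points into two views (world-to-camera 3x4
    # poses) and compare bilinearly sampled colours; this stands in for the
    # view-consistent losses described in the abstract.
    def sample(img, pose):
        cam = (pose[:, :3] @ points.T + pose[:, 3:4]).T   # world -> camera
        uv = (K @ cam.T).T
        uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)       # perspective divide
        h, w = img.shape[-2:]
        grid = torch.stack([2 * uv[:, 0] / (w - 1) - 1,
                            2 * uv[:, 1] / (h - 1) - 1], dim=-1)
        return F.grid_sample(img[None], grid[None, None],
                             align_corners=True)[0, :, 0].T
    return (sample(img_a, pose_a) - sample(img_b, pose_b)).abs().mean()

# Toy usage with a unit-sphere SDF standing in for a learned network.
if __name__ == "__main__":
    sdf = lambda p: p.norm(dim=-1, keepdim=True) - 1.0
    origins = torch.zeros(8, 3); origins[:, 2] = -3.0
    dirs = torch.zeros(8, 3); dirs[:, 2] = 1.0
    pts = intersect_sdf(sdf, origins, dirs)
    K = torch.tensor([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
    pose = torch.cat([torch.eye(3), torch.tensor([[0.], [0.], [3.]])], dim=1)
    img = torch.rand(3, 480, 640)
    print(view_consistency_loss(pts, K, pose, pose, img, img))

The detach-then-correct step is a common trick for making ray/SDF intersections differentiable without backpropagating through the whole marching loop; the paper's actual intersection and loss formulations may differ.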
Published
2024-03-24
How to Cite
Huang, S.-S., Zou, Z., Zhang, Y., Cao, Y.-P., & Shan, Y. (2024). SC-NeuS: Consistent Neural Surface Reconstruction from Sparse and Noisy Views. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 2357-2365. https://doi.org/10.1609/aaai.v38i3.28010
Issue
Section
AAAI Technical Track on Computer Vision II