Bidirectional Optical Flow NeRF: High Accuracy and High Quality under Fewer Views

Authors

  • Shuo Chen Beijing University of Posts and Telecommunications
  • Binbin Yan Beijing University of Posts and Telecommunications
  • Xinzhu Sang Beijing University of Posts and Telecommunications
  • Duo Chen Beijing University of Posts and Telecommunications
  • Peng Wang Beijing University of Posts and Telecommunications
  • Xiao Guo Beijing University of Posts and Telecommunications
  • Chongli Zhong Beijing University of Posts and Telecommunications
  • Huaming Wan Beijing University of Posts and Telecommunications

DOI:

https://doi.org/10.1609/aaai.v37i1.25109

Keywords:

CV: 3D Computer Vision, CV: Computational Photography, Image & Video Synthesis

Abstract

Neural Radiance Fields (NeRF) can implicitly represent 3D-consistent RGB images and geometry by optimizing an underlying continuous volumetric scene function from a sparse set of input views, which has greatly benefited view synthesis tasks. However, NeRF fails to estimate correct geometry when given fewer views, and consequently fails to synthesize novel views. Existing works introduce depth images or add depth estimation networks to address NeRF's poor view synthesis under fewer views. However, because a single depth image lacks spatial consistency and depth estimation performs poorly with fewer views, existing methods still struggle with this problem. This paper therefore proposes Bidirectional Optical Flow NeRF (BOF-NeRF), which addresses the problem by mining optical flow information between 2D images. Our key insight is that a loss designed around 2D optical flow images can effectively guide NeRF to learn correct geometry and synthesize correct novel views. We also propose a view-enhanced fusion method based on geometry and color consistency to address the loss of detail in NeRF's novel views. We conduct extensive experiments on the NeRF-LLFF and DTU MVS benchmarks for novel view synthesis tasks with fewer images in different complex real scenes. We further demonstrate the robustness of BOF-NeRF under different baseline distances on the Middlebury dataset. In all cases, BOF-NeRF outperforms current state-of-the-art baselines for novel view synthesis and scene geometry estimation.
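The abstract's key insight — using 2D optical flow between input views to supervise NeRF's geometry — can be illustrated with a small sketch. The idea is that a rendered depth map, together with the camera intrinsics and poses, induces a flow field between two views; penalizing its discrepancy against a precomputed optical flow (in both directions, hence "bidirectional") constrains the geometry. All function names, the pinhole camera model, and the cam-to-world pose convention below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def induced_flow(depth_a, K, pose_a, pose_b):
    """Flow from view A to view B implied by a rendered depth map.

    Assumes a pinhole camera with intrinsics K and 4x4 cam-to-world
    poses (illustrative convention, not taken from the paper).
    """
    h, w = depth_a.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3xN
    # Back-project pixels to camera-A coordinates using rendered depth.
    cam_a = (np.linalg.inv(K) @ pix) * depth_a.reshape(1, -1)
    # Map into world coordinates, then into camera-B coordinates.
    world = pose_a[:3, :3] @ cam_a + pose_a[:3, 3:4]
    cam_b = pose_b[:3, :3].T @ (world - pose_b[:3, 3:4])
    # Project into view B and subtract the original pixel locations.
    proj = K @ cam_b
    uv_b = proj[:2] / proj[2:3]
    return (uv_b - pix[:2].astype(float)).T.reshape(h, w, 2)

def bidirectional_flow_loss(depth_a, depth_b, flow_ab, flow_ba, K, pose_a, pose_b):
    """L1 discrepancy between depth-induced flow and precomputed 2D
    optical flow, summed over both directions (hypothetical loss)."""
    f_ab = induced_flow(depth_a, K, pose_a, pose_b)
    f_ba = induced_flow(depth_b, K, pose_b, pose_a)
    return np.abs(f_ab - flow_ab).mean() + np.abs(f_ba - flow_ba).mean()
```

When the rendered depth is geometrically consistent with the observed optical flow, this loss vanishes; during training, its gradient (in an autodiff framework rather than NumPy) would push NeRF's density field toward the correct geometry.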

Published

2023-06-26

How to Cite

Chen, S., Yan, B., Sang, X., Chen, D., Wang, P., Guo, X., Zhong, C., & Wan, H. (2023). Bidirectional Optical Flow NeRF: High Accuracy and High Quality under Fewer Views. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 359-368. https://doi.org/10.1609/aaai.v37i1.25109

Section

AAAI Technical Track on Computer Vision I