A General Implicit Framework for Fast NeRF Composition and Rendering

Authors

  • Xinyu Gao State Key Lab of CAD&CG, Zhejiang University
  • Ziyi Yang State Key Lab of CAD&CG, Zhejiang University
  • Yunlu Zhao State Key Lab of CAD&CG, Zhejiang University
  • Yuxiang Sun Zhejiang Lab
  • Xiaogang Jin State Key Lab of CAD&CG, Zhejiang University
  • Changqing Zou State Key Lab of CAD&CG, Zhejiang University Zhejiang Lab

DOI:

https://doi.org/10.1609/aaai.v38i3.27952

Keywords:

CV: Applications, CV: 3D Computer Vision, CV: Computational Photography, Image & Video Synthesis, ML: Deep Learning Algorithms

Abstract

A variety of Neural Radiance Fields (NeRF) methods have recently achieved remarkable success in high rendering speed. However, current acceleration methods are specialized and incompatible with various implicit methods, preventing real-time composition over various types of NeRF works. Because NeRF relies on sampling along rays, it is possible to provide general guidance for acceleration. To that end, we propose a general implicit pipeline for composing NeRF objects quickly. Our method enables the casting of dynamic shadows within or between objects using analytical light sources, while allowing multiple NeRF objects to be seamlessly placed and rendered together with arbitrary rigid transformations. Mainly, our work introduces a new surface representation known as Neural Depth Fields (NeDF), which quickly determines the spatial relationship between objects by allowing direct intersection computation between rays and implicit surfaces. It leverages an intersection neural network to query NeRF for acceleration instead of depending on an explicit spatial structure. Our proposed method is the first to enable both the progressive and interactive composition of NeRF objects. Additionally, it serves as a previewing plugin for a range of existing NeRF works.
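The core idea described above, a network that maps a ray directly to the depth of its first intersection with an implicit surface, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the MLP architecture, the 6-D ray parameterization, and the random stand-in weights are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    """Plain fully connected network: ReLU hidden layers, linear output."""
    for W, b in weights[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = weights[-1]
    return x @ W + b

def init_weights(sizes, rng):
    """Random stand-in weights; a real NeDF would be trained on the scene."""
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def nedf_intersect(origin, direction, weights):
    """Query the depth field for a ray and return the intersection point.

    The network maps the 6-D ray (origin, unit direction) to a scalar
    depth t, so the hit point is origin + t * direction -- one forward
    pass instead of marching samples along the ray.
    """
    d = direction / np.linalg.norm(direction)
    t = mlp(np.concatenate([origin, d])[None, :], weights)[0, 0]
    t = max(float(t), 0.0)  # depths along the ray are non-negative
    return origin + t * d, t

# Example query: a ray from z = -3 looking down the +z axis.
weights = init_weights([6, 32, 32, 1], rng)
point, depth = nedf_intersect(np.array([0.0, 0.0, -3.0]),
                              np.array([0.0, 0.0, 1.0]), weights)
```

Because the intersection is a single network evaluation, this kind of query can stand in for explicit spatial structures (octrees, distance-field grids) when composing objects or testing shadow-ray occlusion.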

Published

2024-03-24

How to Cite

Gao, X., Yang, Z., Zhao, Y., Sun, Y., Jin, X., & Zou, C. (2024). A General Implicit Framework for Fast NeRF Composition and Rendering. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 1833–1841. https://doi.org/10.1609/aaai.v38i3.27952

Section

AAAI Technical Track on Computer Vision II