Ced-NeRF: A Compact and Efficient Method for Dynamic Neural Radiance Fields

Authors

  • Youtian Lin, Nanjing University; Harbin Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v38i4.28138

Keywords:

CV: 3D Computer Vision, CV: Computational Photography, Image & Video Synthesis

Abstract

Rendering photorealistic dynamic scenes has been a focus of recent research, with applications in virtual and augmented reality. While the Neural Radiance Field (NeRF) has shown remarkable rendering quality for static scenes, real-time rendering of dynamic scenes remains challenging due to the expensive computation introduced by the time dimension. Explicit representations, in particular voxel grids, have been incorporated into hybrid representations to accelerate the training and rendering of neural radiance fields. However, employing a hybrid representation for dynamic scenes leads to overfitting because of its fast convergence, which can produce artifacts (e.g., floaters, noisy geometry) in novel views. To address this, we propose a compact and efficient method for dynamic neural radiance fields, named Ced-NeRF, which requires only a small number of additional parameters to construct a hybrid representation of a dynamic NeRF. Evaluation on dynamic scene datasets shows that Ced-NeRF achieves fast rendering while maintaining high rendering quality. Our method outperforms current state-of-the-art methods in terms of quality as well as training and rendering speed.
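
To make the hybrid-representation idea in the abstract concrete, below is a minimal PyTorch sketch of a dynamic radiance field that pairs an explicit voxel feature grid with a small time-conditioned MLP. The class name, grid resolution, feature size, and network layout are illustrative assumptions for exposition, not the architecture proposed in the paper.

# Minimal conceptual sketch (illustrative, not the paper's actual model):
# an explicit voxel feature grid holds most of the capacity, and a tiny
# time-conditioned MLP decodes interpolated features into density and color.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HybridDynamicField(nn.Module):
    def __init__(self, grid_res=64, feat_dim=8, hidden=64):
        super().__init__()
        # Explicit part: dense 3D grid of learnable features.
        self.grid = nn.Parameter(
            0.1 * torch.randn(1, feat_dim, grid_res, grid_res, grid_res)
        )
        # Implicit part: small MLP mapping (feature, time) -> (density, RGB).
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, xyz, t):
        # xyz: (N, 3) points in [-1, 1]^3; t: (N, 1) normalized time in [0, 1].
        pts = xyz.view(1, -1, 1, 1, 3)                    # (1, N, 1, 1, 3)
        feats = F.grid_sample(self.grid, pts, align_corners=True)
        feats = feats.view(self.grid.shape[1], -1).t()    # (N, feat_dim)
        out = self.mlp(torch.cat([feats, t], dim=-1))
        sigma = F.softplus(out[:, :1])                    # non-negative density
        rgb = torch.sigmoid(out[:, 1:])                   # colors in [0, 1]
        return sigma, rgb


if __name__ == "__main__":
    model = HybridDynamicField()
    xyz = torch.rand(1024, 3) * 2 - 1
    t = torch.rand(1024, 1)
    sigma, rgb = model(xyz, t)
    print(sigma.shape, rgb.shape)  # torch.Size([1024, 1]) torch.Size([1024, 3])

Keeping the MLP small and placing most parameters in the interpolated grid is what makes this style of hybrid representation fast to train and render; the paper's contribution concerns doing this compactly for the time dimension.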

Published

2024-03-24

How to Cite

Lin, Y. (2024). Ced-NeRF: A Compact and Efficient Method for Dynamic Neural Radiance Fields. Proceedings of the AAAI Conference on Artificial Intelligence, 38(4), 3504-3512. https://doi.org/10.1609/aaai.v38i4.28138

Section

AAAI Technical Track on Computer Vision III