Spatial Annealing for Efficient Few-shot Neural Rendering

Authors

  • Yuru Xiao Harbin Institute of Technology
  • Deming Zhai Harbin Institute of Technology
  • Wenbo Zhao Harbin Institute of Technology
  • Kui Jiang Harbin Institute of Technology
  • Junjun Jiang Harbin Institute of Technology
  • Xianming Liu Harbin Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v39i8.32939

Abstract

Neural Radiance Fields (NeRF) with hybrid representations have shown impressive capabilities for novel view synthesis while delivering high efficiency. Nonetheless, their performance drops significantly with sparse input views. Various regularization strategies have been devised to address this challenge. However, these strategies either require additional rendering costs or involve complex pipeline designs, leading to a loss of training efficiency. Although FreeNeRF has introduced an efficient frequency annealing strategy, its operation on frequency positional encoding is incompatible with efficient hybrid representations. In this paper, we introduce an accurate and efficient few-shot neural rendering method named Spatial Annealing regularized NeRF (SANeRF), which adopts the pre-filtering design of a hybrid representation. We initially establish the analytical formulation of the frequency band limit for a hybrid architecture by deducing its filtering process. Based on this analysis, we propose a universal form of frequency annealing in the spatial domain, which can be implemented by modulating the sampling kernel to exponentially shrink from an initial one with a narrow grid tangent kernel spectrum. This methodology is crucial for stabilizing the early stages of the training phase and significantly contributes to enhancing the subsequent process of detail refinement. Our extensive experiments reveal that, by adding merely one line of code, SANeRF delivers superior rendering quality and much faster reconstruction speed compared to current few-shot neural rendering methods. Notably, SANeRF outperforms FreeNeRF on the Blender dataset, achieving a 700× faster reconstruction speed.
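The abstract describes the core mechanism as a sampling kernel that shrinks exponentially from a wide initial size over the course of training. The following is a minimal sketch of what such an annealing schedule could look like; the function name, parameters, and exact interpolation are assumptions for illustration, not the paper's released code.

```python
import math

def annealed_kernel_size(step: int, anneal_steps: int,
                         init_size: float, final_size: float) -> float:
    """Hypothetical spatial-annealing schedule: the sampling kernel size
    decays exponentially from init_size to final_size over anneal_steps,
    then stays at final_size for the rest of training.

    All names and the precise schedule here are illustrative assumptions.
    """
    t = min(step / anneal_steps, 1.0)  # normalized training progress in [0, 1]
    # Exponential (geometric) interpolation between the two sizes.
    return init_size * (final_size / init_size) ** t
```

A schedule like this starts training with a wide kernel (low-pass pre-filtering, hence a narrow effective frequency band) and gradually admits higher frequencies as the kernel contracts, which matches the abstract's claim of stabilizing early training before detail refinement.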

Published

2025-04-11

How to Cite

Xiao, Y., Zhai, D., Zhao, W., Jiang, K., Jiang, J., & Liu, X. (2025). Spatial Annealing for Efficient Few-shot Neural Rendering. Proceedings of the AAAI Conference on Artificial Intelligence, 39(8), 8691–8699. https://doi.org/10.1609/aaai.v39i8.32939

Section

AAAI Technical Track on Computer Vision VII