BLiRF: Bandlimited Radiance Fields for Dynamic Scene Modeling

Authors

  • Sameera Ramasinghe, Amazon
  • Violetta Shevchenko, Amazon
  • Gil Avraham, Amazon
  • Anton van den Hengel, Amazon

DOI:

https://doi.org/10.1609/aaai.v38i5.28264

Keywords:

CV: 3D Computer Vision, CV: Motion & Tracking

Abstract

Inferring the 3D structure of a non-rigid dynamic scene from a single moving camera is an under-constrained problem. Inspired by the remarkable progress of neural radiance fields (NeRFs) in photo-realistic novel view synthesis of static scenes, researchers have extended them to dynamic settings. These methods rely heavily on implicit neural priors to regularize the problem. In this work, we take a step back and investigate how current implementations may entail deleterious effects, including limited expressiveness, entanglement of light and density fields, and sub-optimal motion localization. Further, we devise a factorisation-based framework that represents the scene as a composition of bandlimited, high-dimensional signals. We demonstrate compelling results across complex dynamic scenes that involve changes in lighting, texture, and long-range dynamics.
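For illustration only, the following is a minimal sketch of what "representing the scene as a composition of bandlimited, high-dimensional signals" could look like: a truncated (bandlimited) temporal Fourier basis modulating per-frequency spatial coefficient volumes. All names and choices here (the band limit K, the grid resolution, nearest-neighbour sampling in place of trilinear interpolation) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hedged sketch of a bandlimited dynamic field f(x, t): a truncated
# temporal Fourier basis (frequencies 0..K) modulates one spatial
# coefficient volume per basis function. Illustrative only; not the
# paper's implementation.

K = 4                    # number of temporal frequency bands (the band limit); assumed value
res = 32                 # spatial grid resolution; assumed value
rng = np.random.default_rng(0)

# One spatial coefficient volume per temporal basis function:
# a DC term plus cos/sin pairs for frequencies 1..K.
coeffs = rng.normal(size=(2 * K + 1, res, res, res))

def temporal_basis(t: float) -> np.ndarray:
    """Truncated Fourier basis [1, cos(2πt), sin(2πt), ..., cos(2πKt), sin(2πKt)]."""
    freqs = np.arange(1, K + 1)
    return np.concatenate(([1.0],
                           np.cos(2 * np.pi * freqs * t),
                           np.sin(2 * np.pi * freqs * t)))

def nearest_sample(volume: np.ndarray, x: np.ndarray) -> float:
    """Nearest-neighbour lookup of a point x in [0,1]^3 (a stand-in for trilinear interpolation)."""
    idx = np.clip((x * res).astype(int), 0, res - 1)
    return float(volume[tuple(idx)])

def field(x: np.ndarray, t: float) -> float:
    """Evaluate f(x, t) as a bandlimited composition: sum_k b_k(t) * c_k(x)."""
    b = temporal_basis(t)                                  # (2K+1,) temporal weights
    c = np.array([nearest_sample(v, x) for v in coeffs])   # (2K+1,) spatial coefficients at x
    return float(b @ c)

# Usage: evaluate the field at a point over time.
print(field(np.array([0.5, 0.5, 0.5]), t=0.25))
```

Because the temporal spectrum is truncated at frequency K, the recovered field varies smoothly over time regardless of spatial content, which is the essence of a bandlimited temporal prior.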

Published

2024-03-24

How to Cite

Ramasinghe, S., Shevchenko, V., Avraham, G., & van den Hengel, A. (2024). BLiRF: Bandlimited Radiance Fields for Dynamic Scene Modeling. Proceedings of the AAAI Conference on Artificial Intelligence, 38(5), 4641-4649. https://doi.org/10.1609/aaai.v38i5.28264

Issue

Vol. 38 No. 5 (2024)

Section

AAAI Technical Track on Computer Vision IV