Novel View Synthesis Under Large-Deviation Viewpoint for Autonomous Driving

Authors

  • Xin Ma, Beijing University of Posts and Telecommunications
  • Jiguang Zhang, Institute of Automation, Chinese Academy of Sciences
  • Peng Lu, Beijing University of Posts and Telecommunications
  • Shibiao Xu, Beijing University of Posts and Telecommunications
  • Chengwei Pan, Beihang University

DOI:

https://doi.org/10.1609/aaai.v39i6.32641

Abstract

Novel view synthesis is a critical task in autonomous driving. Although 3D Gaussian Splatting (3D-GS) has shown success in generating novel views, it faces challenges in maintaining high-quality rendering when viewpoints deviate significantly from the training set. This difficulty primarily stems from complex lighting conditions and geometric inconsistencies in texture-less regions. To address these issues, we propose an attention-based illumination model that leverages light fields from neighboring views, enhancing the realism of synthesized images. Additionally, we propose a geometry optimization method using planar homography to improve geometric consistency in texture-less regions. Our experiments demonstrate substantial improvements in synthesis quality for large-deviation viewpoints, validating the effectiveness of our approach.
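To make the planar-homography idea from the abstract concrete: for points lying on a 3D plane, pixels in one view map to another view through the plane-induced homography H = K₂ (R − t nᵀ / d) K₁⁻¹, where (R, t) is the relative camera pose, n is the plane normal, and d its distance in the first camera's frame. The sketch below is an illustration of this standard formula, not the authors' implementation; all camera parameters are hypothetical.

```python
import numpy as np

def plane_induced_homography(K1, K2, R, t, n, d):
    """Homography mapping view-1 pixels to view-2 pixels for points
    on the plane n . X = d (n: unit normal, d: plane distance, both
    expressed in view-1 coordinates): H = K2 (R - t n^T / d) K1^{-1}."""
    return K2 @ (R - np.outer(t, n) / d) @ np.linalg.inv(K1)

# Hypothetical setup: shared intrinsics, a 0.5 m sideways baseline,
# and a fronto-parallel plane 10 m in front of the first camera.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.5, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
H = plane_induced_homography(K, K, R, t, n, d=10.0)

# Warp the principal point: homogeneous multiply, then normalize.
p1 = np.array([320.0, 240.0, 1.0])
p2 = H @ p1
p2 /= p2[2]
# Expected disparity f * baseline / depth = 800 * 0.5 / 10 = 40 px,
# so p2 is (280, 240).
```

Enforcing that reprojections of texture-less planar regions satisfy such a homography gives a geometric constraint even where photometric cues are weak, which is the intuition behind the paper's geometry optimization.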

Published

2025-04-11

How to Cite

Ma, X., Zhang, J., Lu, P., Xu, S., & Pan, C. (2025). Novel View Synthesis Under Large-Deviation Viewpoint for Autonomous Driving. Proceedings of the AAAI Conference on Artificial Intelligence, 39(6), 6000–6008. https://doi.org/10.1609/aaai.v39i6.32641

Section

AAAI Technical Track on Computer Vision V