LidarPainter: One-Step Away from Any Lidar View to Novel Guidance

Authors

  • Yuzhou Ji, Shanghai Jiao Tong University
  • Ke Ma, Shanghai Jiao Tong University
  • Hong Cai, 51WORLD
  • Anchun Zhang, 51WORLD
  • Lizhuang Ma, Shanghai Jiao Tong University
  • Xin Tan, East China Normal University

DOI:

https://doi.org/10.1609/aaai.v40i7.37449

Abstract

Dynamic driving scene reconstruction is of great importance in fields such as digital twin systems and autonomous driving simulation. However, unacceptable degradation occurs when the view deviates from the input trajectory, leading to corrupted background and vehicle models. To improve reconstruction quality on novel trajectories, existing methods are subject to various limitations, including inconsistency, deformation, and long processing times. This paper proposes LidarPainter, a one-step diffusion model that recovers consistent driving views from sparse LiDAR conditions and artifact-corrupted renderings in real time, enabling high-fidelity lane shifts in driving scene reconstruction. Extensive experiments show that LidarPainter outperforms state-of-the-art methods in speed, quality, and resource efficiency; specifically, it is 7× faster than StreetCrafter while requiring only one fifth of the GPU memory. LidarPainter also supports stylized generation using text prompts such as “foggy” and “night”, allowing for a diverse expansion of the existing asset library.
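To make the abstract's setup concrete, the sketch below illustrates the general idea of one-step conditional generation: a network that takes an artifact-corrupted novel-view rendering together with a sparse LiDAR depth projection and regresses a clean view in a single forward pass. This is a minimal PyTorch toy, not the authors' implementation; the module name, architecture, input channels, and shapes are all assumptions for illustration only.

```python
# Minimal conceptual sketch (NOT the paper's code): a single-step conditional
# denoiser that maps (corrupted rendering, sparse LiDAR depth) -> clean view.
# All names, channel counts, and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class OneStepConditionalDenoiser(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        # Inputs: 3-channel corrupted rendering + 1-channel sparse LiDAR depth.
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 3, 3, padding=1),  # predict the clean RGB view
        )

    def forward(self, corrupted_render: torch.Tensor, lidar_depth: torch.Tensor) -> torch.Tensor:
        # Concatenate the conditions along the channel axis and regress the
        # clean image in one forward pass (the "one-step" idea).
        x = torch.cat([corrupted_render, lidar_depth], dim=1)
        return self.net(x)

# Toy usage: a 256x256 off-trajectory rendering plus its sparse depth projection.
model = OneStepConditionalDenoiser()
render = torch.rand(1, 3, 256, 256)    # artifact-corrupted rendering
depth = torch.zeros(1, 1, 256, 256)    # sparse LiDAR projection (mostly empty)
clean = model(render, depth)
print(clean.shape)  # torch.Size([1, 3, 256, 256])
```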

Published

2026-03-14

How to Cite

Ji, Y., Ma, K., Cai, H., Zhang, A., Ma, L., & Tan, X. (2026). LidarPainter: One-Step Away from Any Lidar View to Novel Guidance. Proceedings of the AAAI Conference on Artificial Intelligence, 40(7), 5332–5340. https://doi.org/10.1609/aaai.v40i7.37449

Issue

Vol. 40 No. 7 (2026)

Section

AAAI Technical Track on Computer Vision IV