CAG-GS: Consistent Anchor Guided Gaussian Splatting for Large-scale Scene Rendering
DOI:
https://doi.org/10.1609/aaai.v40i14.38120
Abstract
Recently, 3D Gaussian Splatting for scene rendering has attracted much attention in computer vision and graphics, but it generally suffers from heavy computation and storage burdens when handling large-scale scenes. Some existing works employ a divide-and-conquer strategy to alleviate this issue, where an input large scene is divided into many local blocks and each block is handled separately. However, such a strategy generally leads to limited performance due to the inevitable inconsistency among the 3D Gaussians from different blocks. To address this problem, we propose a Consistent Anchor Guided Gaussian Splatting method for large-scale scene rendering under the divide-and-conquer strategy, called CAG-GS. In CAG-GS, a set of learnable anchors for each local block is injected with the corresponding semantic features from a pre-trained semantic segmentation model, SAM2, through a proposed semantic mapping module, and these anchors are then used to predict the attributes of the 3D Gaussians. Moreover, we design a coarse-to-fine training strategy for CAG-GS, where each local block is optimized independently while being guided by globally consistent semantics. Extensive experimental results on five large-scale scenes demonstrate the superiority of the proposed method over five state-of-the-art methods in most cases.
Published
2026-03-14
How to Cite
Xu, S., & Dong, Q. (2026). CAG-GS: Consistent Anchor Guided Gaussian Splatting for Large-scale Scene Rendering. Proceedings of the AAAI Conference on Artificial Intelligence, 40(14), 11388-11396. https://doi.org/10.1609/aaai.v40i14.38120
Section
AAAI Technical Track on Computer Vision XI
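The anchor-to-Gaussian prediction summarized in the abstract can be sketched as follows. This is a minimal illustration only: the dimensions, the per-anchor Gaussian count, and the single linear decoder are assumptions for the sketch, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper).
N_ANCHORS = 8    # learnable anchors in one local block
SEM_DIM = 16     # size of the injected semantic feature (e.g. from a SAM2-like encoder)
K = 4            # Gaussians predicted per anchor
ATTR_DIM = 11    # per-Gaussian attributes: 3 offset + 3 scale + 4 rotation + 1 opacity

# Anchor positions and their injected semantic features for one block.
anchors = rng.normal(size=(N_ANCHORS, 3))
sem_feats = rng.normal(size=(N_ANCHORS, SEM_DIM))

# A single linear layer standing in for the learnable attribute predictor.
W = rng.normal(scale=0.1, size=(3 + SEM_DIM, K * ATTR_DIM))
b = np.zeros(K * ATTR_DIM)

def predict_gaussians(anchors, sem_feats):
    """Map each semantics-injected anchor to K Gaussian attribute vectors."""
    x = np.concatenate([anchors, sem_feats], axis=1)  # (N, 3 + SEM_DIM)
    attrs = x @ W + b                                 # (N, K * ATTR_DIM)
    return attrs.reshape(len(anchors), K, ATTR_DIM)

gaussians = predict_gaussians(anchors, sem_feats)
print(gaussians.shape)  # (8, 4, 11): K Gaussian attribute vectors per anchor
```

Because the semantic features come from one globally shared segmentation model, anchors in different blocks that observe the same object receive consistent conditioning, which is the intuition behind the cross-block consistency claimed in the abstract.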