AnchorDS: Anchoring Dynamic Sources for Semantically Consistent Text-to-3D Generation

Authors

  • Jiayin Zhu, National University of Singapore
  • Linlin Yang, Communication University of China
  • Yicong Li, National University of Singapore
  • Angela Yao, National University of Singapore

DOI:

https://doi.org/10.1609/aaai.v40i16.38404

Abstract

Optimization-based text-to-3D methods distill guidance from 2D generative models via Score Distillation Sampling (SDS) but implicitly treat this guidance as static. This work shows that ignoring source dynamics yields inconsistent optimization trajectories that suppress or merge semantic cues, leading to "semantic over-smoothing" artifacts. We therefore reformulate text-to-3D optimization as mapping a *dynamically evolving source* distribution to a fixed target distribution. We cast the problem into a dual-conditioned latent space, conditioned on both the text prompt and the intermediate rendered image. In this joint setup, we observe that the image condition naturally anchors the current source distribution. Building on this insight, we introduce AnchorDS, an improved score distillation mechanism that stabilizes generation by providing state-anchored guidance through image conditions. We further penalize erroneous source estimates and design lightweight filtering and fine-tuning strategies that refine the anchor with negligible overhead. AnchorDS produces finer-grained detail, more natural colours, and stronger semantic consistency, particularly for complex prompts. Extensive experiments show that our method surpasses previous methods in both quality and efficiency.
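To make the abstract's setup concrete, the following is a minimal toy sketch of an SDS-style update in which the noise prediction can additionally be conditioned on the current render. This is only an illustration of the general idea of state-anchored guidance; the denoiser, conditioning scheme, and all constants here are stand-in assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(z_t, cond):
    # Stand-in for a frozen 2D diffusion model's noise prediction;
    # a real model would be a U-Net conditioned on text/image embeddings.
    return 0.9 * z_t - 0.1 * cond

def sds_step(x, text_cond, image_cond=None, alpha=0.8, w=1.0, lr=0.01):
    """One SDS-style update on the rendered latent x.

    If image_cond is given, the prediction is also conditioned on the
    current render -- a rough stand-in for anchoring the guidance on the
    evolving source state (hypothetical; not the paper's exact method).
    """
    eps = rng.standard_normal(x.shape)
    z_t = np.sqrt(alpha) * x + np.sqrt(1 - alpha) * eps   # noised latent
    cond = text_cond if image_cond is None else 0.5 * (text_cond + image_cond)
    eps_hat = toy_denoiser(z_t, cond)
    grad = w * (eps_hat - eps)   # SDS gradient (Jacobian term dropped)
    return x - lr * grad

x = np.zeros(4)
text_cond = np.ones(4)
for _ in range(100):
    x = sds_step(x, text_cond, image_cond=x)  # anchor on the current state
```

Passing the current state back in as `image_cond` is the toy analogue of treating the source distribution as dynamic rather than static.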

Published

2026-03-14

How to Cite

Zhu, J., Yang, L., Li, Y., & Yao, A. (2026). AnchorDS: Anchoring Dynamic Sources for Semantically Consistent Text-to-3D Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(16), 13943–13951. https://doi.org/10.1609/aaai.v40i16.38404

Section

AAAI Technical Track on Computer Vision XIII