VGD: Value-Guided Diffusion Toward High-Utility Medical Image Segmentation
DOI: https://doi.org/10.1609/aaai.v40i15.38246
Abstract
Progress in medical image segmentation is fundamentally constrained by the scarcity of annotated data. While diffusion models offer a promising solution by generating high-fidelity image–mask pairs, their utility for downstream tasks remains underexplored. A key bottleneck lies in the misalignment between generation outputs and task-specific needs—samples are produced independently of their utility for downstream training. To address this, we propose Value-Guided Diffusion (VGD), a lightweight sampling framework that integrates downstream model feedback into the generative inference process. VGD estimates a value score for each sample based on its utility to downstream training, and leverages this signal to iteratively guide the denoising trajectory toward high-reward regions of the data manifold. Crucially, VGD can be seamlessly integrated into existing medical diffusion models without any additional training or architectural modifications. Extensive experiments across multiple diffusion backbones and segmentation benchmarks demonstrate that VGD significantly boosts downstream segmentation performance while maintaining visual fidelity. Our findings highlight a task-aware sampling principle with potential to underpin future synthetic segmentation pipelines.
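The abstract describes steering a denoising trajectory toward samples with higher estimated downstream value. A minimal, illustrative sketch of that idea is best-of-N value-guided selection at each reverse-diffusion step; everything below (the toy `denoise_step`, the `value` function, and all parameters) is a hypothetical stand-in, not the paper's actual model or value estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x, t):
    # Toy stand-in for one reverse-diffusion step (NOT the paper's model):
    # shrink the state and add step-dependent noise.
    return 0.9 * x + 0.1 * t * rng.standard_normal(x.shape)

def value(x):
    # Hypothetical utility score; in VGD this would reflect a sample's
    # usefulness for downstream segmentation training. Here: closeness
    # to a fixed "useful" target vector.
    target = np.ones_like(x)
    return -float(np.linalg.norm(x - target))

def value_guided_sample(x, steps=10, candidates=8):
    """At each denoising step, draw several candidate next states and
    keep the one with the highest value score, biasing the trajectory
    toward high-value regions (best-of-N selection sketch)."""
    for t in np.linspace(1.0, 0.1, steps):
        proposals = [denoise_step(x, t) for _ in range(candidates)]
        x = max(proposals, key=value)
    return x

x0 = rng.standard_normal(4)  # noisy initialization
guided = value_guided_sample(x0.copy())
print("guided value:", value(guided))
```

By construction, the selected candidate at each step scores at least as high as every other proposal drawn at that step; the paper's framework replaces this toy selection with value estimates derived from downstream model feedback, without retraining the diffusion backbone.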
Published
2026-03-14
How to Cite
Zhang, H., Chen, H., Yang, C., & Lyu, Y. (2026). VGD: Value-Guided Diffusion Toward High-Utility Medical Image Segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(15), 12520–12528. https://doi.org/10.1609/aaai.v40i15.38246
Section
AAAI Technical Track on Computer Vision XII