Make Foundation Models Trustworthy Again: Causal Fine-Adaptation for Medical Image Segmentation
DOI:
https://doi.org/10.1609/aaai.v40i32.39970
Abstract
Vision foundation models (e.g., SAM2, CLIP) show strong generalization in natural image analysis but degrade significantly in specialized domains like medical imaging. This is critical for tasks such as brain tumor segmentation, where errors directly affect surgical planning and patient outcomes. In such contexts, segmentation must be highly reliable and structurally precise, underscoring the need for adaptable methods with low error tolerance. While fine-tuning is the dominant strategy, it is computationally expensive and prone to forgetting. To address this, we propose CausalBridgeNet, a causality-guided correction framework for medical image segmentation. Inspired by predictive coding theories of the Bayesian brain, our method introduces a Predictive Causal Reasoning Unit (PCRU) that estimates structured error maps and delivers targeted feedback to iteratively refine predictions. This forms a closed-loop, error-aware correction mechanism without modifying the foundation model. By keeping the backbone frozen, CausalBridgeNet preserves general visual priors while enhancing task-specific accuracy. On the BraTS 2025 benchmark, it achieves an average Dice score of 84.48 and HD95 of 5.48 across tumor subregions, demonstrating its effectiveness for high-precision medical segmentation.
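The closed-loop correction idea described above can be illustrated with a minimal sketch. This is not the authors' PCRU: `frozen_backbone` and `correction_unit` are hypothetical stand-ins (a toy sigmoid segmenter and a toy error estimator), and the loop simply adds a fraction of the estimated error map back into the prediction while leaving the backbone untouched. A Dice score helper is included since the abstract reports that metric.

```python
import numpy as np

def frozen_backbone(image):
    # Hypothetical frozen foundation-model segmenter: its "weights"
    # are never updated; it just maps intensities to probabilities.
    return 1.0 / (1.0 + np.exp(-image))

def correction_unit(pred, image):
    # Hypothetical error-map estimator standing in for the PCRU:
    # compares the prediction against a crude foreground proxy.
    target_proxy = (image > 0).astype(float)
    return target_proxy - pred

def closed_loop_refine(image, steps=3, step_size=0.5):
    # Closed-loop, error-aware correction: the backbone output is
    # iteratively refined by feedback, with the backbone kept frozen.
    pred = frozen_backbone(image)
    for _ in range(steps):
        error_map = correction_unit(pred, image)
        pred = np.clip(pred + step_size * error_map, 0.0, 1.0)
    return pred

def dice_score(mask_a, mask_b, eps=1e-8):
    # Dice = 2|A ∩ B| / (|A| + |B|) for binary masks.
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum() + eps)
```

Under this toy setup, each feedback step shrinks the residual between the prediction and the proxy target geometrically, which is the intuition behind iterative error-aware refinement; the real PCRU learns the error estimator rather than hard-coding it.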
Published
2026-03-14
How to Cite
Yang, H., Chen, Y., Ma, S., & Guo, F. (2026). Make Foundation Models Trustworthy Again: Causal Fine-Adaptation for Medical Image Segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(32), 27512–27520. https://doi.org/10.1609/aaai.v40i32.39970
Section
AAAI Technical Track on Machine Learning IX