Towards a Comprehensive, Efficient and Promptable Anatomic Structure Segmentation Model Using 3D Whole-Body CT Scans

Authors

  • Heng Guo (DAMO Academy, Alibaba Group; Hupan Lab, 310023, Hangzhou, China)
  • Jianfeng Zhang (DAMO Academy, Alibaba Group; Hupan Lab, 310023, Hangzhou, China)
  • Jiaxing Huang (DAMO Academy, Alibaba Group)
  • Tony C. W. Mok (DAMO Academy, Alibaba Group; Hupan Lab, 310023, Hangzhou, China)
  • Dazhou Guo (DAMO Academy, Alibaba Group)
  • Ke Yan (DAMO Academy, Alibaba Group; Hupan Lab, 310023, Hangzhou, China)
  • Le Lu (DAMO Academy, Alibaba Group)
  • Dakai Jin (DAMO Academy, Alibaba Group)
  • Minfeng Xu (DAMO Academy, Alibaba Group; Hupan Lab, 310023, Hangzhou, China)

DOI:

https://doi.org/10.1609/aaai.v39i3.32335

Abstract

The Segment Anything Model (SAM) demonstrates strong generalization ability on natural image segmentation. However, its direct adaptation to medical image segmentation tasks shows significant performance drops, and it requires an excessive number of prompt points to reach reasonable accuracy. Although quite a few studies have explored adapting SAM to medical image volumes, the efficiency of 2D adaptation methods is unsatisfactory, and 3D adaptation methods can only segment specific organs or tumors. In this work, we propose a comprehensive and scalable 3D SAM model for whole-body CT segmentation, named CT-SAM3D. Instead of adapting SAM, we propose a 3D promptable segmentation model trained with a (nearly) fully labeled CT dataset. To train CT-SAM3D effectively, the model must respond accurately to higher-dimensional spatial prompts, and GPU memory constraints mandate 3D patch-wise training. We therefore propose two key technical developments: 1) a progressively and spatially aligned prompt encoding method that effectively encodes click prompts in local 3D space; and 2) a cross-patch prompt scheme that captures more 3D spatial context, which reduces the editing workload when interactively prompting on large organs. CT-SAM3D is trained on a curated dataset of 1204 CT scans covering 107 whole-body anatomies and extensively validated on five datasets, achieving significantly better results than all previous SAM-derived models.
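To make the idea of spatially aligned prompt encoding concrete, the sketch below encodes a 3D click prompt as a Gaussian heatmap defined on the same voxel grid as a local patch, so the prompt channel stays spatially registered with the image features. This is only an illustration of the general technique under stated assumptions; the function name, patch size, and Gaussian parameterization are hypothetical and not taken from the paper.

```python
import numpy as np

def click_to_heatmap(click_zyx, patch_shape=(32, 32, 32), sigma=2.0):
    """Hypothetical illustration: encode a 3D click (z, y, x), given in
    patch-local voxel coordinates, as a Gaussian heatmap channel that can
    be concatenated with the image patch for a promptable 3D network."""
    zz, yy, xx = np.meshgrid(
        np.arange(patch_shape[0]),
        np.arange(patch_shape[1]),
        np.arange(patch_shape[2]),
        indexing="ij",  # keep (z, y, x) axis order consistent with the patch
    )
    cz, cy, cx = click_zyx
    # Squared Euclidean distance of every voxel to the click location
    d2 = (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

# A click at the patch center yields a peak of 1.0 at that voxel
heat = click_to_heatmap((16, 16, 16))
```

Because the heatmap is computed in patch-local coordinates, the encoded prompt remains aligned with the cropped volume regardless of where the patch sits in the whole-body scan, which is the property the abstract's patch-wise training setting requires.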

Published

2025-04-11

How to Cite

Guo, H., Zhang, J., Huang, J., Mok, T. C. W., Guo, D., Yan, K., … Xu, M. (2025). Towards a Comprehensive, Efficient and Promptable Anatomic Structure Segmentation Model Using 3D Whole-Body CT Scans. Proceedings of the AAAI Conference on Artificial Intelligence, 39(3), 3247–3256. https://doi.org/10.1609/aaai.v39i3.32335

Section

AAAI Technical Track on Computer Vision II