VOILA: Complexity-Aware Universal Segmentation of CT Images by Voxel Interacting with Language

Authors

  • Zishuo Wan School of Automation and Electrical Engineering, University of Science and Technology Beijing
  • Yu Gao School of Automation and Electrical Engineering, University of Science and Technology Beijing
  • Wanyuan Pang School of Automation and Electrical Engineering, University of Science and Technology Beijing
  • Dawei Ding School of Automation and Electrical Engineering, University of Science and Technology Beijing Key Laboratory of Knowledge Automation for Industrial Processes, Ministry of Education

DOI:

https://doi.org/10.1609/aaai.v39i7.32805

Abstract

Satisfactory progress has been achieved recently in universal segmentation of CT images. Following the success of vision-language methods, there is a growing trend towards using text prompts and contrastive learning to develop universal segmentation models. However, there is a significant imbalance in information density between 3D images and text prompts. Moreover, the standard fully connected layer segmentation approach struggles to handle multiple classes and exhibits poor generalizability. To address these challenges, we propose the VOxel Interacting with LAnguage method (VOILA) for universal CT image segmentation. First, we align voxels and language in a shared representation space and classify voxels based on cosine similarity. We then develop the Voxel-Language Interaction framework to mitigate the class imbalance caused by foreground-background discrepancies and variations in target volumes. Furthermore, we propose a Complexity-Aware Sampling method that focuses on regions that are hard to segment by generating pseudo heatmaps from a trainable Gaussian mixture distribution. Our results indicate that VOILA achieves improved performance with fewer parameters and lower computational cost during training, and generalizes well across diverse datasets without additional fine-tuning.
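The core classification step described above — aligning voxel features and text-prompt embeddings in a shared space and assigning each voxel to the class with the highest cosine similarity — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `classify_voxels`, the array shapes, and the use of NumPy are assumptions for exposition.

```python
import numpy as np

def classify_voxels(voxel_feats, text_embeds):
    """Assign each voxel to the class whose text embedding is most similar.

    voxel_feats: (N, D) array of voxel features projected into the shared space.
    text_embeds: (C, D) array of per-class text-prompt embeddings.
    Returns an (N,) array of predicted class indices.
    """
    # L2-normalize both sets so the dot product equals cosine similarity.
    v = voxel_feats / np.linalg.norm(voxel_feats, axis=1, keepdims=True)
    t = text_embeds / np.linalg.norm(text_embeds, axis=1, keepdims=True)
    sim = v @ t.T                 # (N, C) cosine similarities
    return sim.argmax(axis=1)     # most similar class per voxel
```

Because classification reduces to similarity against a set of embeddings rather than a fixed fully connected output layer, new classes can in principle be handled by supplying new text prompts, which is consistent with the generalizability claim in the abstract.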

Published

2025-04-11

How to Cite

Wan, Z., Gao, Y., Pang, W., & Ding, D. (2025). VOILA: Complexity-Aware Universal Segmentation of CT Images by Voxel Interacting with Language. Proceedings of the AAAI Conference on Artificial Intelligence, 39(7), 7482–7490. https://doi.org/10.1609/aaai.v39i7.32805

Section

AAAI Technical Track on Computer Vision VI