Bidirectional RNN-based Few Shot Learning for 3D Medical Image Segmentation

Authors

  • Soopil Kim, DGIST
  • Sion An, DGIST
  • Philip Chikontwe, DGIST
  • Sang Hyun Park, DGIST

DOI:

https://doi.org/10.1609/aaai.v35i3.16275

Keywords:

Segmentation, Transfer/Adaptation/Multi-task/Meta/Automated Learning

Abstract

Segmentation of organs of interest in 3D medical images is necessary for accurate diagnosis and longitudinal studies. Though recent advances in deep learning have shown success on many segmentation tasks, large datasets are required for high performance, and the annotation process is both time-consuming and labor-intensive. In this paper, we propose a 3D few shot segmentation framework for accurate organ segmentation using only a few annotated training samples of the target organ. To achieve this, a U-Net-like network is designed to predict segmentation by learning the relationship between 2D slices of the support data and a query image; a bidirectional gated recurrent unit (GRU) enforces consistency of the encoded features between adjacent slices. We also introduce a transfer learning method that adapts to the characteristics of the target image and organ by updating the model before testing, using arbitrary support and query pairs sampled from the support data. We evaluate our proposed model on three 3D CT datasets with annotations of different organs. Our model yields significantly improved performance over state-of-the-art few shot segmentation models and is comparable to a fully supervised model trained with more target training data.
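The core idea of propagating information between adjacent 2D slices can be illustrated with a bidirectional GRU over per-slice feature vectors. The following is a minimal NumPy sketch, not the authors' implementation: the feature dimensions, parameter names, and random inputs are all illustrative assumptions; in the paper the GRU operates on features produced by the U-Net-like encoder.

```python
import numpy as np

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step using the standard update/reset-gate equations."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(x @ Wz + h @ Uz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate state
    return (1.0 - z) * h + z * h_tilde

def bidirectional_gru(slice_feats, params_f, params_b, hidden):
    """Run a GRU over slice features in both directions and concatenate
    the forward and backward hidden states for each slice."""
    n = len(slice_feats)
    hf = np.zeros(hidden)
    hb = np.zeros(hidden)
    fwd = [None] * n
    bwd = [None] * n
    for t in range(n):                 # top-to-bottom pass
        hf = gru_cell(slice_feats[t], hf, *params_f)
        fwd[t] = hf
    for t in reversed(range(n)):       # bottom-to-top pass
        hb = gru_cell(slice_feats[t], hb, *params_b)
        bwd[t] = hb
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

# Toy example: 5 adjacent slices, each encoded as a 16-dim feature vector
# (in practice these would come from the encoder of the segmentation network).
rng = np.random.default_rng(0)
feat, hidden = 16, 8
make_params = lambda: tuple(
    rng.standard_normal(shape) * 0.1
    for shape in [(feat, hidden), (hidden, hidden)] * 3
)
params_f, params_b = make_params(), make_params()
slice_feats = [rng.standard_normal(feat) for _ in range(5)]

out = bidirectional_gru(slice_feats, params_f, params_b, hidden)
print(len(out), out[0].shape)  # one 16-dim bidirectional state per slice
```

Because each slice's output state depends on both its superior and inferior neighbors, inconsistent predictions between adjacent slices can be smoothed out, which is the consistency property the paper attributes to the bidirectional GRU.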

Published

2021-05-18

How to Cite

Kim, S., An, S., Chikontwe, P., & Park, S. H. (2021). Bidirectional RNN-based Few Shot Learning for 3D Medical Image Segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(3), 1808-1816. https://doi.org/10.1609/aaai.v35i3.16275

Section

AAAI Technical Track on Computer Vision II