A Continual Learning Framework for Uncertainty-Aware Interactive Image Segmentation
DOI:
https://doi.org/10.1609/aaai.v35i7.16752
Keywords:
Human-in-the-loop Machine Learning
Abstract
Deep learning models have achieved state-of-the-art performance in semantic image segmentation, but the results of fully automatic algorithms are not always guaranteed to satisfy users. Interactive segmentation offers a solution by accepting user annotations on selected areas of an image to refine the segmentation results. However, most existing models only focus on correcting the current image's misclassified pixels, with no knowledge carried over to other images. In this work, we formulate interactive image segmentation as a continual learning problem and propose a framework to learn effectively from user annotations, aiming to improve segmentation on both the current image and unseen images in future tasks while avoiding deteriorated performance on previously seen images. The framework employs a probabilistic mask to control the neural network's kernel activation and extract the features best suited to segmenting the images in each task. We also apply a task-aware embedding to automatically infer the optimal kernel activation for initial segmentation and subsequent refinement. Interactions with users are guided through multi-source uncertainty estimation so that users can focus on the most important areas, minimizing the overall manual annotation effort. Experiments on both medical and natural image datasets illustrate the proposed framework's effectiveness in terms of basic segmentation performance, forward knowledge transfer, and backward knowledge transfer.
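The paper is not reproduced on this page, so the sketch below is only an illustration of the two mechanisms the abstract names: a per-task probabilistic mask gating convolutional kernel activations, and an entropy-based per-pixel uncertainty map for guiding user annotation. Class names, the sigmoid-gate parameterization, and the entropy choice are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ProbabilisticKernelMask(nn.Module):
    """Hypothetical per-task probabilistic gate over conv kernels.

    Each task keeps its own mask logits; a soft mask in [0, 1] scales the
    output channels of a conv layer so that each task activates the kernels
    most suitable for its images.
    """

    def __init__(self, num_kernels: int, num_tasks: int):
        super().__init__()
        # One set of mask logits per task (illustrative parameterization).
        self.logits = nn.Parameter(torch.zeros(num_tasks, num_kernels))

    def forward(self, features: torch.Tensor, task_id: int) -> torch.Tensor:
        # features: (batch, channels, H, W), channels == num_kernels
        probs = torch.sigmoid(self.logits[task_id])   # kernel activation probabilities
        mask = probs.view(1, -1, 1, 1)                # broadcast over batch and space
        return features * mask                        # gate the kernel outputs


def entropy_uncertainty(probs: torch.Tensor) -> torch.Tensor:
    """Per-pixel predictive entropy as one possible uncertainty source.

    probs: (batch, classes, H, W) softmax outputs; returns (batch, H, W).
    High-entropy pixels would be shown to the user for annotation first.
    """
    return -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=1)


# Usage sketch: gate a conv layer's output for task 0 and rank pixels by uncertainty.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
gate = ProbabilisticKernelMask(num_kernels=16, num_tasks=5)
x = torch.randn(2, 3, 64, 64)
gated = gate(conv(x), task_id=0)                      # (2, 16, 64, 64)
logits = nn.Conv2d(16, 4, kernel_size=1)(gated)       # toy 4-class segmentation head
uncertainty = entropy_uncertainty(logits.softmax(dim=1))
print(gated.shape, uncertainty.shape)
```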
Published
2021-05-18
How to Cite
Zheng, E., Yu, Q., Li, R., Shi, P., & Haake, A. (2021). A Continual Learning Framework for Uncertainty-Aware Interactive Image Segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(7), 6030-6038. https://doi.org/10.1609/aaai.v35i7.16752
Issue
Section
AAAI Technical Track on Humans and AI