Forget What Has Seen: Selective Concept Unlearning in Segmentation Foundation Models
DOI:
https://doi.org/10.1609/aaai.v40i25.39233

Abstract
Machine unlearning (MU) has emerged as a critical tool for removing sensitive or personal information from machine learning models, empowering individuals with the right to be forgotten. While MU has achieved success in classification and generative tasks, whether this technique can be effectively applied to segmentation foundation models remains uncertain. To address this issue, we propose an efficient method, Selective Concept Unlearning (SCU), to unlearn the segmentation capability for target concepts. SCU consists of several key components: (1) the Multi-level Forgetting Module, built on a hierarchical three-level suppression strategy: (i) distillation level: negative distillation steers the model's output distribution away from the teacher's correct outputs, erasing its learned concept recognition; (ii) attention level: attention suppression minimizes the model's attention to target regions; (iii) output level: predictions for the target concept are erased directly by relabeling them as background. (2) The Preservation Module maintains segmentation quality for non-target concepts. Additionally, we introduce a set of metrics to evaluate segmentation unlearning methods. Experiments demonstrate that SCU consistently outperforms existing baselines.

Published
2026-03-14
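The three suppression levels described in the abstract can be illustrated with a minimal loss sketch. This is not the authors' implementation; all function names, tensor shapes, and the specific loss forms (a negated KL term for negative distillation, a masked mean for attention suppression, background cross-entropy for output relabeling) are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=1):
    """Numerically stable softmax over the class axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def negative_distillation_loss(student_logits, teacher_logits):
    """Distillation level (assumed form): negate KL(teacher || student)
    so minimizing it pushes the student's distribution AWAY from the
    frozen teacher's correct outputs."""
    p = softmax(teacher_logits)           # teacher probs, (B, C, H, W)
    q = softmax(student_logits)           # student probs
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=1).mean()
    return -kl

def attention_suppression_loss(attn_maps, target_mask):
    """Attention level (assumed form): penalize attention mass that
    falls inside the target-concept region (mask is 1 on target pixels)."""
    return float((attn_maps * target_mask).mean())

def output_relabel_loss(student_logits, target_mask, background_cls=0):
    """Output level (assumed form): cross-entropy pushing target pixels
    toward the background class, erasing the target prediction."""
    q = softmax(student_logits)
    ce = -np.log(q[:, background_cls] + 1e-12)   # (B, H, W)
    return float(ce[target_mask.astype(bool)].mean())
```

A full objective would combine these with a preservation term (e.g. standard distillation on non-target pixels) to keep segmentation quality elsewhere, mirroring the Preservation Module.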
How to Cite
Du, M., Li, J., Pan, S., Zhan, Y., Qi, G., Zhang, Y., … Wei, Q. (2026). Forget What Has Seen: Selective Concept Unlearning in Segmentation Foundation Models. Proceedings of the AAAI Conference on Artificial Intelligence, 40(25), 20923–20931. https://doi.org/10.1609/aaai.v40i25.39233
Issue
Section
AAAI Technical Track on Machine Learning II