Training Matting Models Without Alpha Labels
DOI:
https://doi.org/10.1609/aaai.v39i6.32597
Abstract
Labeling difficulty has long been a problem in deep image matting. To avoid the need for fine labels, this work explores using rough annotations, such as trimaps that coarsely indicate the foreground and background, as supervision. We show that cooperation between semantics learned from the indicated known regions and properly assumed matting rules can help infer alpha values in transition areas. Inspired by the nonlocal principle in traditional image matting, we build a directional distance consistency loss (DDC loss) over each pixel neighborhood to constrain the alpha values conditioned on the input image. DDC loss forces the distance between similar pairs on the alpha matte and on the corresponding image to be consistent. In this way, alpha values can be propagated from the learned known regions to the unknown transition areas. With only images and trimaps, a matting model can be trained under the supervision of a known-region loss and the proposed DDC loss. Experiments on the AM-2K and P3M-10K datasets show that our paradigm achieves performance comparable to the fine-label-supervised baseline, and sometimes yields even more satisfying results than the human-labeled ground truth.
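The abstract's description of the DDC loss can be sketched in code: for every pixel and each neighbor within a small window, pairs that look similar in the image are required to have an alpha-space distance matching their image-space distance. The sketch below is a minimal NumPy illustration under simplifying assumptions (grayscale image, L1 distances, a fixed similarity threshold); the function name, window radius, and threshold are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def ddc_loss(image, alpha, radius=1, sim_thresh=0.1):
    """Illustrative directional distance consistency (DDC) loss sketch.

    For each offset (dy, dx) in a (2*radius+1)^2 window, compare every
    pixel with its neighbor at that offset. For pairs that are similar
    in the image (small image-space distance), penalize the mismatch
    between the alpha-space distance and the image-space distance, so
    that alpha varies consistently with the image. Assumes grayscale
    inputs in [0, 1]; a real implementation would use RGB distances
    and batched tensors.
    """
    H, W = alpha.shape
    loss, count = 0.0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue  # skip the pixel itself
            # Valid center region so that (y+dy, x+dx) stays in bounds.
            y0, y1 = max(0, -dy), H - max(0, dy)
            x0, x1 = max(0, -dx), W - max(0, dx)
            img_c = image[y0:y1, x0:x1]
            img_n = image[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
            a_c = alpha[y0:y1, x0:x1]
            a_n = alpha[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
            d_img = np.abs(img_c - img_n)    # image-space pair distance
            d_alpha = np.abs(a_c - a_n)      # alpha-space pair distance
            similar = d_img < sim_thresh     # keep only similar pairs
            loss += np.abs(d_alpha - d_img)[similar].sum()
            count += int(similar.sum())
    return loss / max(count, 1)
```

Intuitively, where the image is locally uniform (a similar pair) but the predicted alpha jumps, the loss is positive; where alpha is as smooth as the image, the loss vanishes, which is what lets alpha values propagate from trimap-known regions into the unknown band.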
Published
2025-04-11
How to Cite
Liu, W., Ye, Z., Lu, H., Cao, Z., & Yue, X. (2025). Training Matting Models Without Alpha Labels. Proceedings of the AAAI Conference on Artificial Intelligence, 39(6), 5604–5612. https://doi.org/10.1609/aaai.v39i6.32597
Section
AAAI Technical Track on Computer Vision V