Weakly-Supervised Camouflaged Object Detection with Scribble Annotations

Authors

  • Ruozhen He, City University of Hong Kong
  • Qihua Dong, City University of Hong Kong
  • Jiaying Lin, City University of Hong Kong
  • Rynson W.H. Lau, City University of Hong Kong

DOI:

https://doi.org/10.1609/aaai.v37i1.25156

Keywords:

CV: Low Level & Physics-Based Vision, CV: Object Detection & Categorization

Abstract

Existing camouflaged object detection (COD) methods rely heavily on large-scale datasets with pixel-wise annotations. However, due to the ambiguous boundaries of camouflaged objects, annotating them pixel by pixel is highly time-consuming and labor-intensive, taking roughly 60 minutes per image. In this paper, we propose the first weakly-supervised COD method, using scribble annotations as supervision. To this end, we first relabel 4,040 images in existing camouflaged object datasets with scribbles, which takes only about 10 seconds per image. As scribble annotations describe only the primary structure of objects without details, to enable the network to learn to localize the boundaries of camouflaged objects, we propose a novel consistency loss composed of two parts: a cross-view loss to attain reliable consistency over different images, and an inside-view loss to maintain consistency inside a single prediction map. In addition, we observe that humans use semantic information to segment regions near the boundaries of camouflaged objects. Hence, we further propose a feature-guided loss, which includes visual features directly extracted from images and semantically significant features captured by the model. Finally, we propose a novel network for COD via scribble learning on structural information and semantic relations. Our network has two novel modules: the local-context contrasted (LCC) module, which mimics visual inhibition to enhance image contrast/sharpness and expand the scribbles into potential camouflaged regions, and the logical semantic relation (LSR) module, which analyzes semantic relations to determine the regions representing the camouflaged object. Experimental results show that our model outperforms the relevant SOTA methods on three COD benchmarks, with an average improvement of 11.0% on MAE, 3.2% on S-measure, 2.5% on E-measure, and 4.4% on weighted F-measure.
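To give a rough sense of the two-part consistency idea described in the abstract, the following is a minimal, generic sketch. It is not the paper's actual formulation: the function names, the mean-absolute-difference cross-view term, and the total-variation-style inside-view smoothness term are all illustrative assumptions.

```python
# Hypothetical sketch of a two-part consistency loss for scribble supervision.
# Predictions are plain nested lists of per-pixel foreground scores in [0, 1].
# NOTE: this is a generic illustration, not the formulation from the paper.

def cross_view_loss(pred, pred_aug_aligned):
    """Mean absolute difference between the prediction for an image and the
    (re-aligned) prediction for an augmented view of the same image."""
    n = len(pred) * len(pred[0])
    return sum(
        abs(a - b)
        for row_a, row_b in zip(pred, pred_aug_aligned)
        for a, b in zip(row_a, row_b)
    ) / n

def inside_view_loss(pred):
    """Total-variation-style smoothness inside a single prediction map:
    penalizes abrupt score changes between neighboring pixels."""
    h, w = len(pred), len(pred[0])
    tv = 0.0
    for i in range(h):
        for j in range(w):
            if i + 1 < h:  # vertical neighbor
                tv += abs(pred[i][j] - pred[i + 1][j])
            if j + 1 < w:  # horizontal neighbor
                tv += abs(pred[i][j] - pred[i][j + 1])
    return tv / (h * w)

def consistency_loss(pred, pred_aug_aligned, alpha=0.5):
    """Combine the two terms; alpha weights the inside-view part."""
    return cross_view_loss(pred, pred_aug_aligned) + alpha * inside_view_loss(pred)
```

In practice both terms would operate on tensors from a segmentation network, with the augmented view's prediction warped back into the original frame before comparison; the list-based version above only illustrates the structure of the objective.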

Published

2023-06-26

How to Cite

He, R., Dong, Q., Lin, J., & W.H. Lau, R. (2023). Weakly-Supervised Camouflaged Object Detection with Scribble Annotations. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 781-789. https://doi.org/10.1609/aaai.v37i1.25156

Section

AAAI Technical Track on Computer Vision I