Self-Supervised Pretraining for RGB-D Salient Object Detection

Authors

  • Xiaoqi Zhao, Dalian University of Technology
  • Youwei Pang, Dalian University of Technology
  • Lihe Zhang, Dalian University of Technology
  • Huchuan Lu, Dalian University of Technology; Peng Cheng Laboratory
  • Xiang Ruan, Tiwaki Co., Ltd.

DOI:

https://doi.org/10.1609/aaai.v36i3.20257

Keywords:

Computer Vision (CV)

Abstract

Existing CNN-based RGB-D salient object detection (SOD) networks all need to be pretrained on ImageNet to learn hierarchical features that provide a good initialization. However, collecting and annotating large-scale datasets is time-consuming and expensive. In this paper, we utilize self-supervised representation learning (SSL) to design two pretext tasks: cross-modal auto-encoding and depth-contour estimation. Our pretext tasks require only a small amount of unlabeled RGB-D data for pretraining, which enables the network to capture rich semantic contexts and reduces the gap between the two modalities, thereby providing an effective initialization for the downstream task. In addition, for the inherent problem of cross-modal fusion in RGB-D SOD, we propose a consistency-difference aggregation (CDA) module that splits a single feature fusion into multi-path fusion to achieve an adequate perception of consistent and differential information. The CDA module is general and suitable for both cross-modal and cross-level feature fusion. Extensive experiments on six benchmark datasets show that our self-supervised pretrained model performs favorably against most state-of-the-art methods pretrained on ImageNet. The source code will be publicly available at https://github.com/Xiaoqi-Zhao-DLUT/SSLSOD.
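
To make the pretext tasks concrete, the sketch below shows what a cross-modal auto-encoder objective could look like in PyTorch: an encoder-decoder reconstructs the depth map from the paired RGB image, so the supervision comes from the data itself rather than from human annotation. Every module name, layer width, and the L1 reconstruction loss here are illustrative assumptions; the abstract does not specify the authors' architecture.

```python
# Hypothetical pretext-task sketch (not the authors' released code): the
# encoder learns cross-modal features by reconstructing depth from RGB.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalAutoEncoder(nn.Module):
    """Reconstructs one modality from the other (here: depth from RGB),
    so no manual labels are needed during pretraining."""
    def __init__(self, in_ch=3, out_ch=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One self-supervised step on a dummy RGB-D pair: the depth map itself is the
# target. A contour target for the second pretext task (depth-contour
# estimation) could likewise be derived from the depth map by edge filtering.
rgb = torch.randn(2, 3, 224, 224)
depth = torch.randn(2, 1, 224, 224)
model = CrossModalAutoEncoder()
loss = F.l1_loss(model(rgb), depth)
loss.backward()
```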
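
Similarly, the following is a minimal sketch of a consistency-difference style fusion block, assuming "multi-path fusion" means one path for information the two inputs agree on and one for the information unique to each; the operators the paper actually uses may differ.

```python
# Hypothetical sketch of a consistency-difference aggregation (CDA) style
# block; the exact paths and operators in the paper may differ.
import torch
import torch.nn as nn

class CDAFusion(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.consist = nn.Conv2d(ch, ch, 3, padding=1)      # consistency path
        self.differ = nn.Conv2d(2 * ch, ch, 3, padding=1)   # difference path
        self.merge = nn.Conv2d(2 * ch, ch, 3, padding=1)    # aggregation

    def forward(self, f_a, f_b):
        # Consistent information: responses the two features share.
        consistent = self.consist(f_a * f_b)
        # Differential information: what each feature contributes alone.
        diff = self.differ(torch.cat([f_a - f_b, f_b - f_a], dim=1))
        # Aggregate both paths into a single fused feature, rather than
        # collapsing everything through one fusion operation.
        return self.merge(torch.cat([consistent, diff], dim=1))

# Applies to cross-modal (RGB/depth) or cross-level (shallow/deep) features,
# as long as the two inputs share the same shape.
fused = CDAFusion(128)(torch.randn(2, 128, 28, 28), torch.randn(2, 128, 28, 28))
```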

Published

2022-06-28

How to Cite

Zhao, X., Pang, Y., Zhang, L., Lu, H., & Ruan, X. (2022). Self-Supervised Pretraining for RGB-D Salient Object Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 36(3), 3463-3471. https://doi.org/10.1609/aaai.v36i3.20257

Section

AAAI Technical Track on Computer Vision III