Pyramidal Feature Shrinking for Salient Object Detection
DOI:
https://doi.org/10.1609/aaai.v35i3.16331
Keywords:
Segmentation
Abstract
Recently, salient object detection (SOD) has made great progress, benefiting from the effectiveness of various feature aggregation strategies. However, existing methods usually aggregate low-level features containing details and high-level features containing semantics over a large span, which introduces noise into the aggregated features and generates inaccurate saliency maps. To address this issue, we propose the pyramidal feature shrinking network (PFSNet), which aggregates adjacent feature nodes in pairs with layer-by-layer shrinkage, so that the aggregated features fuse effective details and semantics together and discard interfering information. Specifically, a pyramidal shrinking decoder (PSD) is proposed to aggregate adjacent features hierarchically in an asymptotic manner. Unlike other methods that aggregate features with significantly different information, this method focuses only on adjacent feature nodes in each layer and shrinks them to a final unique feature node. Besides, we propose an adjacent fusion module (AFM) to perform mutual spatial enhancement between adjacent features so as to dynamically weight the features and adaptively fuse the appropriate information. In addition, a scale-aware enrichment module (SEM), based on the features extracted from the backbone, is utilized to obtain rich scale information and generate diverse initial features with dilated convolutions. Extensive quantitative and qualitative experiments demonstrate that the proposed intuitive framework outperforms 14 state-of-the-art approaches on 5 public datasets.
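To make the pairwise shrinking scheme concrete, the following is a minimal PyTorch-style sketch of layer-by-layer aggregation of adjacent feature nodes down to a single node, as described in the abstract. The module names, channel width, and the simple upsample-concat-convolve fusion are illustrative assumptions only; they do not reproduce the paper's AFM or SEM.

```python
# Sketch of pairwise "pyramidal shrinking": N multi-level features are fused
# adjacent-pair by adjacent-pair, stage after stage, until one node remains.
# The PairFusion block is a placeholder; the paper fuses adjacent nodes with its AFM.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PairFusion(nn.Module):
    """Placeholder fusion of one adjacent (finer, coarser) feature pair."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, low, high):
        # Upsample the coarser feature to the finer resolution, then fuse the pair.
        high = F.interpolate(high, size=low.shape[-2:], mode="bilinear", align_corners=False)
        return self.conv(torch.cat([low, high], dim=1))


class PyramidalShrinking(nn.Module):
    """Shrinks num_levels features to a single node via stages of adjacent-pair fusion."""
    def __init__(self, channels, num_levels):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.ModuleList([PairFusion(channels) for _ in range(num_levels - 1 - s)])
            for s in range(num_levels - 1)
        ])

    def forward(self, feats):
        # feats: fine-to-coarse list of feature maps with equal channel counts.
        for stage in self.stages:
            feats = [fuse(feats[i], feats[i + 1]) for i, fuse in enumerate(stage)]
        return feats[0]  # the final unique feature node


if __name__ == "__main__":
    # Five backbone levels, all assumed pre-projected to 64 channels.
    feats = [torch.randn(1, 64, 64 // 2 ** i, 64 // 2 ** i) for i in range(5)]
    out = PyramidalShrinking(channels=64, num_levels=5)(feats)
    print(out.shape)  # torch.Size([1, 64, 64, 64])
```

Because each fusion only combines neighboring levels, the semantic gap between the two inputs at every step stays small, which is the motivation for shrinking pairwise rather than aggregating distant levels directly.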
Published
2021-05-18
How to Cite
Ma, M., Xia, C., & Li, J. (2021). Pyramidal Feature Shrinking for Salient Object Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 35(3), 2311-2318. https://doi.org/10.1609/aaai.v35i3.16331
Issue
Section
AAAI Technical Track on Computer Vision II