Multi-Type Self-Attention Guided Degraded Saliency Detection

Authors

  • Ziqi Zhou, Tianjin University
  • Zheng Wang, Tianjin University
  • Huchuan Lu, Dalian University of Technology
  • Song Wang, University of South Carolina
  • Meijun Sun, Tianjin University

DOI:

https://doi.org/10.1609/aaai.v34i07.7010

Abstract

Existing saliency detection techniques are sensitive to image quality and perform poorly on degraded images. In this paper, we systematically analyze the state of research on detecting salient objects in degraded images and then propose a new multi-type self-attention network, MSANet, for degraded saliency detection. The main contributions are: 1) applying attention transfer learning to promote semantic detail perception and internal feature mining of the target network on degraded images; 2) developing a multi-type self-attention mechanism that recalculates the weights of multi-scale features. By computing global and local attention scores, we obtain weighted features at different scales, effectively suppress interference from noise and redundant information, and achieve more complete boundary extraction. The proposed MSANet converts low-quality inputs to high-quality saliency maps directly, in an end-to-end fashion. Experiments on seven widely used datasets show that our approach performs well on both clear and degraded images.
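The abstract describes the mechanism without implementation details, but the core idea of re-weighting multi-scale features from global and local attention scores can be sketched in code. The following is a minimal PyTorch sketch under stated assumptions: the module name GlobalLocalAttention, the channels // 8 projection width, the depthwise-convolution stand-in for the local attention score, and the residual combination via a learned gamma are all illustrative choices, not the authors' exact design.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GlobalLocalAttention(nn.Module):
        """Illustrative sketch: re-weight one feature scale using global and
        local self-attention scores. Names and design choices here are
        assumptions, not the paper's exact architecture."""

        def __init__(self, channels, local_kernel=7):
            super().__init__()
            # 1x1 projections for query/key/value, as in standard self-attention.
            self.query = nn.Conv2d(channels, channels // 8, 1)
            self.key = nn.Conv2d(channels, channels // 8, 1)
            self.value = nn.Conv2d(channels, channels, 1)
            # Local score approximated by a depthwise conv over a small window.
            self.local = nn.Conv2d(channels, channels, local_kernel,
                                   padding=local_kernel // 2, groups=channels)
            self.gamma = nn.Parameter(torch.zeros(1))  # learned mixing weight

        def forward(self, x):
            b, c, h, w = x.shape
            # Global attention: every spatial position attends to all others.
            q = self.query(x).flatten(2).transpose(1, 2)  # (b, hw, c//8)
            k = self.key(x).flatten(2)                    # (b, c//8, hw)
            v = self.value(x).flatten(2).transpose(1, 2)  # (b, hw, c)
            scores = F.softmax(q @ k, dim=-1)             # (b, hw, hw)
            global_feat = (scores @ v).transpose(1, 2).reshape(b, c, h, w)
            # Local attention: sigmoid-gated response within a small window.
            local_feat = torch.sigmoid(self.local(x)) * x
            # Residual combination keeps the original content; gamma learns
            # how strongly the attention-weighted features are mixed in.
            return x + self.gamma * (global_feat + local_feat)

Applied independently at each backbone scale, e.g. weighted = [GlobalLocalAttention(64)(f) for f in feats] for a list of 64-channel feature maps feats, such a block yields the weighted multi-scale features the abstract describes; the softmax and sigmoid gating is what downweights noisy or redundant responses.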

Published

2020-04-03

How to Cite

Zhou, Z., Wang, Z., Lu, H., Wang, S., & Sun, M. (2020). Multi-Type Self-Attention Guided Degraded Saliency Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 13082-13089. https://doi.org/10.1609/aaai.v34i07.7010

Issue

Vol. 34 No. 07 (2020)

Section

AAAI Technical Track: Vision