Split then Refine: Stacked Attention-guided ResUNets for Blind Single Image Visible Watermark Removal


  • Xiaodong Cun University of Macau
  • Chi-Man Pun University of Macau




Computational Photography, Image & Video Synthesis, Scene Analysis & Understanding


Digital watermarking is a commonly used technique to protect the copyright of media. At the same time, to test and increase the robustness of watermarks, attacking techniques, such as watermark removal, have also drawn attention from the community. Previous watermark removal methods either require the watermark location from users or train a multi-task network to recover the background indiscriminately. However, when trained jointly, the network performs better on watermark detection than on recovering the texture. Inspired by this observation, and to erase visible watermarks blindly, we propose a novel two-stage framework with stacked attention-guided ResUNets to simulate the process of detection, removal, and refinement. In the first stage, we design a multi-task network called SplitNet. It learns the basic features for three sub-tasks jointly, while learning the task-specific features separately using multiple channel attentions. Then, given the predicted mask and the coarsely restored image, we design RefineNet to smooth the watermarked region with a mask-guided spatial attention. Beyond the network structure, the proposed algorithm also combines multiple perceptual losses for better quality both visually and numerically. We extensively evaluate our algorithm on four different datasets under various settings, and the experiments show that our approach outperforms other state-of-the-art methods by a large margin.
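The two-stage design described above can be sketched in PyTorch. This is a minimal, illustrative mock-up of the "split then refine" idea only: the module names, channel sizes, and layer choices below are assumptions for illustration and do not reproduce the paper's actual ResUNet backbones or loss functions.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed form)."""

    def __init__(self, ch, r=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // r, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // r, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        # Re-weight channels of the shared features per task.
        return x * self.gate(x)


class SplitNet(nn.Module):
    """Stage 1: shared features plus per-task channel attentions feeding
    three heads: watermark mask, watermark appearance, coarse background."""

    def __init__(self, ch=32):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.att = nn.ModuleList(ChannelAttention(ch) for _ in range(3))
        self.mask_head = nn.Conv2d(ch, 1, 3, padding=1)
        self.wm_head = nn.Conv2d(ch, 3, 3, padding=1)
        self.bg_head = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x):
        f = self.shared(x)
        mask = torch.sigmoid(self.mask_head(self.att[0](f)))
        watermark = self.wm_head(self.att[1](f))
        coarse = self.bg_head(self.att[2](f))
        return mask, watermark, coarse


class RefineNet(nn.Module):
    """Stage 2: refine the coarse result, with the predicted mask acting
    as a spatial attention that focuses correction on the watermarked region."""

    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(7, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, image, coarse, mask):
        residual = self.body(torch.cat([image, coarse, mask], dim=1))
        # Mask-guided update: only the predicted watermark region is changed.
        return coarse + mask * residual


# Usage: run both stages on a (random stand-in for a) watermarked image.
x = torch.randn(1, 3, 64, 64)
mask, watermark, coarse = SplitNet()(x)
refined = RefineNet()(x, coarse, mask)
```

The key design point the abstract highlights is the split: stage one shares one feature extractor across the three sub-tasks but specializes them only through lightweight channel attentions, while stage two spends its capacity purely on smoothing the region the predicted mask marks as watermarked.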




How to Cite

Cun, X., & Pun, C.-M. (2021). Split then Refine: Stacked Attention-guided ResUNets for Blind Single Image Visible Watermark Removal. Proceedings of the AAAI Conference on Artificial Intelligence, 35(2), 1184-1192. https://doi.org/10.1609/aaai.v35i2.16205



AAAI Technical Track on Computer Vision I