ShadowFormer: Global Context Helps Shadow Removal

Authors

  • Lanqing Guo, Nanyang Technological University
  • Siyu Huang, Harvard University
  • Ding Liu, ByteDance
  • Hao Cheng, Nanyang Technological University
  • Bihan Wen, Nanyang Technological University

DOI:

https://doi.org/10.1609/aaai.v37i1.25148

Keywords:

CV: Low Level & Physics-Based Vision

Abstract

Recent deep learning methods have achieved promising results in image shadow removal. However, most existing approaches operate locally within shadow and non-shadow regions, resulting in severe artifacts around shadow boundaries and inconsistent illumination between shadow and non-shadow regions. It remains challenging for deep shadow removal models to exploit the global contextual correlation between shadow and non-shadow regions. In this work, we first propose a Retinex-based shadow model, from which we derive a novel transformer-based network, dubbed ShadowFormer, that exploits non-shadow regions to help restore shadow regions. A multi-scale channel attention framework is employed to hierarchically capture global information. Building on this, we propose a Shadow-Interaction Module (SIM) with Shadow-Interaction Attention (SIA) in the bottleneck stage to effectively model the contextual correlation between shadow and non-shadow regions. We conduct extensive experiments on three popular public datasets, ISTD, ISTD+, and SRD, to evaluate the proposed method. Our method achieves state-of-the-art performance while using up to 150X fewer model parameters.
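For readers curious how the Shadow-Interaction Attention described in the abstract might look in code, the sketch below is a minimal, single-head PyTorch reading of the idea: attention scores between shadow and non-shadow tokens are boosted so that shadowed regions borrow illumination context from lit regions. The class name, the mask-based interaction term, and the gamma weight are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class ShadowInteractionAttention(nn.Module):
    """Illustrative single-head attention whose scores are re-weighted so that
    shadow tokens attend more strongly to non-shadow tokens (and vice versa).
    This is one reading of the abstract, not the authors' implementation."""

    def __init__(self, dim: int, gamma: float = 1.0):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5
        self.gamma = gamma  # strength of the shadow/non-shadow interaction boost (assumed)

    def forward(self, x: torch.Tensor, shadow_mask: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) tokens from the bottleneck; shadow_mask: (B, N) in [0, 1].
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale           # (B, N, N) raw scores
        # Interaction map: large where query and key fall on opposite sides of the
        # shadow mask, encouraging shadow regions to draw context from lit regions.
        m = shadow_mask.unsqueeze(-1)                           # (B, N, 1)
        interaction = m * (1 - m.transpose(-2, -1)) + (1 - m) * m.transpose(-2, -1)
        attn = (attn + self.gamma * interaction).softmax(dim=-1)
        return self.proj(attn @ v)

if __name__ == "__main__":
    sia = ShadowInteractionAttention(dim=32)
    tokens = torch.randn(2, 64, 32)            # e.g. an 8x8 bottleneck feature map, flattened
    mask = (torch.rand(2, 64) > 0.5).float()   # toy shadow mask
    print(sia(tokens, mask).shape)             # torch.Size([2, 64, 32])
```

In the paper's pipeline such a module would sit in the bottleneck of the multi-scale encoder-decoder; the toy mask and single head here are only to keep the example self-contained.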

Published

2023-06-26

How to Cite

Guo, L., Huang, S., Liu, D., Cheng, H., & Wen, B. (2023). ShadowFormer: Global Context Helps Shadow Removal. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 710-718. https://doi.org/10.1609/aaai.v37i1.25148

Section

AAAI Technical Track on Computer Vision I