FMRNet: Image Deraining via Frequency Mutual Revision
DOI:
https://doi.org/10.1609/aaai.v38i11.29186
Keywords:
ML: Transfer, Domain Adaptation, Multi-Task Learning; CV: Computational Photography, Image & Video Synthesis; CV: Low Level & Physics-based Vision; ML: Deep Learning Algorithms; ML: Transparent, Interpretable, Explainable ML; ML: Unsupervised & Self-Supervised Learning
Abstract
The wavelet transform has emerged as a powerful tool for deciphering structural information within images, and recent research shows that combining it with neural networks, thereby exploiting both the spatial domain and the frequency space, can substantially improve image deraining. However, a comprehensive framework that accounts for the intrinsic frequency properties and the correlation between rain residue and background remains to be explored. In this work, we investigate the relationship between the rain-free and residue components in the frequency domain, forming a frequency mutual revision network (FMRNet) for image deraining. Specifically, we exploit the mutual representation of the rain residue and background components in the frequency domain to better separate the rain layer from the clean background while preserving the structural textures of the degraded image. Meanwhile, the rain distribution predicted from the low-frequency coefficients, which can be regarded as a degradation prior, is used to refine the separation of the rain residue and background components. Inversely, the updated rain residue is used to refine the low-frequency rain distribution prediction, forming multi-layer mutual learning. Extensive experiments on seven datasets demonstrate that FMRNet delivers significant performance gains for image deraining, surpassing the state-of-the-art ELFormer by 1.14 dB in PSNR on the Rain100L dataset at a similar computational cost. Code and retrained models are available at https://github.com/kuijiang94/FMRNet.
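The mutual-revision loop described above can be sketched roughly as follows. This is a hedged illustration, not the authors' released code: the single-level Haar decomposition, the module names HaarDWT and MutualRevisionBlock, and the plain convolutional branches are assumptions made for clarity; the actual FMRNet architecture is available in the repository linked above.

```python
# Minimal PyTorch sketch of the frequency mutual-revision idea (illustrative only).
import torch
import torch.nn as nn


class HaarDWT(nn.Module):
    """Single-level Haar wavelet transform: splits an image into one
    low-frequency band (LL) and three high-frequency bands (LH, HL, HH)."""

    def forward(self, x):
        a, b = x[:, :, 0::2, 0::2], x[:, :, 0::2, 1::2]
        c, d = x[:, :, 1::2, 0::2], x[:, :, 1::2, 1::2]
        ll = (a + b + c + d) / 4
        lh = (a + b - c - d) / 4
        hl = (a - b + c - d) / 4
        hh = (a - b - c + d) / 4
        return ll, torch.cat([lh, hl, hh], dim=1)


class MutualRevisionBlock(nn.Module):
    """One round of mutual learning (hypothetical layout): the low-frequency band
    predicts a rain-distribution prior that guides the rain-residue / background
    separation on the high-frequency bands, and the updated residue is fed back
    to revise the low-frequency prediction."""

    def __init__(self, ch=3):
        super().__init__()
        self.rain_prior = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.Sigmoid())
        self.residue_branch = nn.Conv2d(3 * ch + ch, 3 * ch, 3, padding=1)
        self.background_branch = nn.Conv2d(3 * ch + 3 * ch, 3 * ch, 3, padding=1)
        self.low_refine = nn.Conv2d(ch + 3 * ch, ch, 3, padding=1)

    def forward(self, low, high):
        prior = self.rain_prior(low)                                    # degradation prior from LL
        residue = self.residue_branch(torch.cat([high, prior], dim=1))  # rain residue on HF bands
        background = self.background_branch(torch.cat([high, residue], dim=1))
        low = low + self.low_refine(torch.cat([low, residue], dim=1))   # feedback to LL prediction
        return low, background


# Usage: one mutual-revision round on a toy rainy image; a full network would
# stack several such rounds and invert the wavelet transform at the end.
x = torch.rand(1, 3, 64, 64)
low, high = HaarDWT()(x)
low, background_hf = MutualRevisionBlock()(low, high)
```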
Published
2024-03-24
How to Cite
Jiang, K., Jiang, J., Liu, X., Xu, X., & Ma, X. (2024). FMRNet: Image Deraining via Frequency Mutual Revision. Proceedings of the AAAI Conference on Artificial Intelligence, 38(11), 12892-12900. https://doi.org/10.1609/aaai.v38i11.29186
Issue
Vol. 38 No. 11 (2024)
Section
AAAI Technical Track on Machine Learning II