When AWGN-Based Denoiser Meets Real Noises

Authors

  • Yuqian Zhou, UIUC
  • Jianbo Jiao, University of Oxford
  • Haibin Huang, Megvii
  • Yang Wang, Stony Brook University
  • Jue Wang, Megvii
  • Honghui Shi, UIUC
  • Thomas Huang, UIUC

DOI:

https://doi.org/10.1609/aaai.v34i07.7009

Abstract

Discriminative learning-based image denoisers have achieved promising performance on synthetic noise such as Additive White Gaussian Noise (AWGN). The synthetic noise adopted in most previous work is pixel-independent, whereas real noise is mostly spatially/channel-correlated and spatially/channel-variant. This domain gap yields unsatisfactory performance on images with real noise if the model is trained only with AWGN. In this paper, we propose a novel approach to boost the performance of a real image denoiser that is trained only with synthetic pixel-independent noise data dominated by AWGN. First, we train a deep model that consists of a noise estimator and a denoiser with mixed AWGN and Random Value Impulse Noise (RVIN). We then investigate a Pixel-shuffle Down-sampling (PD) strategy to adapt the trained model to real noise. Extensive experiments demonstrate the effectiveness and generalization of the proposed approach. Notably, our method achieves state-of-the-art performance on real sRGB images in the DND benchmark among models trained with synthetic noise. Code is available at https://github.com/yzhouas/PD-Denoising-pytorch.
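
For a concrete picture of the PD adaptation step described above, the following is a minimal PyTorch sketch, assuming a hypothetical AWGN-trained `denoiser` callable and a stride that evenly divides the image size; it illustrates the idea under those assumptions rather than reproducing the released implementation.

    import torch
    import torch.nn.functional as F

    def pd_denoise(noisy, denoiser, stride=2):
        # Pixel-shuffle Down-sampling (PD) sketch: break a spatially correlated
        # noisy image into stride*stride sub-images whose noise is closer to
        # pixel-independent, denoise each one, then refill the original grid.
        # noisy:    (N, C, H, W) tensor; H and W divisible by `stride` (assumed).
        # denoiser: any callable mapping a noisy batch to its clean estimate.
        n, c, h, w = noisy.shape
        s = stride

        # PD: sample every s-th pixel to form s*s mosaics of size (H/s, W/s).
        sub = F.pixel_unshuffle(noisy, s)                  # (N, C*s*s, H/s, W/s)
        sub = sub.view(n, c, s * s, h // s, w // s)        # split channel / offset
        sub = sub.permute(0, 2, 1, 3, 4)                   # (N, s*s, C, H/s, W/s)
        sub = sub.reshape(n * s * s, c, h // s, w // s)    # batch of sub-images

        # Each sub-image now looks closer to AWGN-corrupted data,
        # so the AWGN-trained denoiser is applied to it directly.
        with torch.no_grad():
            clean = denoiser(sub)

        # PD-refill: place the denoised sub-images back onto the full grid.
        clean = clean.view(n, s * s, c, h // s, w // s).permute(0, 2, 1, 3, 4)
        clean = clean.reshape(n, c * s * s, h // s, w // s)
        return F.pixel_shuffle(clean, s)                   # (N, C, H, W)

    # Shape check with an identity "denoiser":
    # out = pd_denoise(torch.rand(1, 3, 256, 256), denoiser=lambda x: x, stride=2)

In practice the stride would be chosen to match how strongly the real noise is spatially correlated: a larger stride decorrelates the noise further, at the cost of denoising lower-resolution mosaics.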

Published

2020-04-03

How to Cite

Zhou, Y., Jiao, J., Huang, H., Wang, Y., Wang, J., Shi, H., & Huang, T. (2020). When AWGN-Based Denoiser Meets Real Noises. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 13074-13081. https://doi.org/10.1609/aaai.v34i07.7009

Section

AAAI Technical Track: Vision