Joint Demosaicking and Denoising in the Wild: The Case of Training Under Ground Truth Uncertainty

Authors

  • Jierun Chen The Hong Kong University of Science and Technology
  • Song Wen The Hong Kong University of Science and Technology
  • S.-H. Gary Chan The Hong Kong University of Science and Technology

DOI:

https://doi.org/10.1609/aaai.v35i2.16186

Keywords:

Low Level & Physics-based Vision

Abstract

Image demosaicking and denoising are two fundamental steps in the digital camera pipeline, aiming to reconstruct clean color images from noisy luminance readings. In this paper, we propose and study Wild-JDD, a novel learning framework for joint demosaicking and denoising in the wild. In contrast to previous works, which generally assume that the ground truth of the training data perfectly reflects reality, we consider here the more common, imperfect case of ground truth uncertainty in the wild. We first illustrate how it manifests as various kinds of artifacts, including the zipper effect, color moiré and residual noise. We then formulate a two-stage data degradation process to capture such ground truth uncertainty, in which a conjugate prior distribution is imposed upon a base distribution. Next, we derive an evidence lower bound (ELBO) loss to train a neural network that approximates the parameters of the conjugate prior distribution conditioned on the degraded input. Finally, to further enhance performance on out-of-distribution input, we design a simple but effective fine-tuning strategy that takes the input as a weakly informative prior. By taking ground truth uncertainty into account, Wild-JDD enjoys good interpretability during optimization. Extensive experiments validate that it outperforms state-of-the-art schemes on joint demosaicking and denoising tasks on both synthetic and realistic raw datasets.
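To give a concrete flavor of the conjugate-prior idea the abstract describes, the following is a minimal, hypothetical sketch. It assumes a Gaussian base distribution whose mean and variance are given a Normal-Inverse-Gamma (NIG) conjugate prior, so that integrating out the base parameters yields a Student-t marginal likelihood; a network predicting the NIG parameters (gamma, nu, alpha, beta) per pixel could then be trained on this negative log-likelihood. The exact distributions, parameterization, and loss used in the paper are not specified in this abstract, and the function `nig_nll` and its argument names are illustrative only, not the authors' implementation.

```python
import math

def nig_nll(y, gamma, nu, alpha, beta):
    """Negative log-likelihood of an observation y under the Student-t
    marginal obtained by integrating a Gaussian likelihood against a
    Normal-Inverse-Gamma prior NIG(gamma, nu, alpha, beta).

    This is a standard conjugate-prior identity (illustrative only):
    larger (y - gamma)^2 means higher NLL, and nu/alpha/beta control
    the model's confidence in its own prediction.
    """
    omega = 2.0 * beta * (1.0 + nu)  # spread of the Student-t marginal
    return (0.5 * math.log(math.pi / nu)
            - alpha * math.log(omega)
            + (alpha + 0.5) * math.log(nu * (y - gamma) ** 2 + omega)
            + math.lgamma(alpha) - math.lgamma(alpha + 0.5))

# Example: for a fixed prior (nu=1, alpha=2, beta=1), a predicted mean
# closer to the observed pixel value incurs a lower loss.
close = nig_nll(0.5, gamma=0.5, nu=1.0, alpha=2.0, beta=1.0)
far = nig_nll(0.5, gamma=0.9, nu=1.0, alpha=2.0, beta=1.0)
```

Training on such a likelihood is what lets the model express ground truth uncertainty: a pixel whose training target is suspect (e.g. contaminated by residual noise) can be fit with a wide marginal rather than forcing the mean onto the noisy target.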

Published

2021-05-18

How to Cite

Chen, J., Wen, S., & Chan, S.-H. G. (2021). Joint Demosaicking and Denoising in the Wild: The Case of Training Under Ground Truth Uncertainty. Proceedings of the AAAI Conference on Artificial Intelligence, 35(2), 1018-1026. https://doi.org/10.1609/aaai.v35i2.16186

Section

AAAI Technical Track on Computer Vision I