Joint Demosaicking and Denoising in the Wild: The Case of Training Under Ground Truth Uncertainty
Keywords: Low Level & Physics-based Vision
Abstract
Image demosaicking and denoising are two fundamental steps in digital camera pipelines, aiming to reconstruct clean color images from noisy sensor readings. In this paper, we propose and study Wild-JDD, a novel learning framework for joint demosaicking and denoising in the wild. In contrast to previous works, which generally assume that the ground truth of the training data perfectly reflects reality, we consider the more common imperfect case of ground truth uncertainty in the wild. We first illustrate its manifestation as various kinds of artifacts, including the zipper effect, color moiré and residual noise. We then formulate a two-stage data degradation process to capture such ground truth uncertainty, in which a conjugate prior distribution is imposed upon a base distribution. From this formulation, we derive an evidence lower bound (ELBO) loss to train a neural network that approximates the parameters of the conjugate prior distribution conditioned on the degraded input. Finally, to further enhance performance on out-of-distribution input, we design a simple but effective fine-tuning strategy that treats the input as a weakly informative prior. By taking ground truth uncertainty into account, Wild-JDD enjoys good interpretability during optimization. Extensive experiments validate that it outperforms state-of-the-art schemes on joint demosaicking and denoising tasks on both synthetic and realistic raw datasets.
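To make the ELBO objective concrete, the following is a minimal illustrative sketch, not the paper's exact formulation. It assumes a Gaussian base distribution with known variance and a Gaussian (conjugate) variational posterior over the clean pixel value, with a weakly informative Gaussian prior centered at the degraded input, as in the fine-tuning strategy described above. All function names and parameter choices here are hypothetical.

```python
import numpy as np

def elbo(y, m, v, m0, v0, s2):
    """Per-pixel ELBO under a Gaussian model (illustrative sketch).

    y      : noisy ground-truth pixel value
    m, v   : mean/variance of the network's predicted posterior q(mu) = N(m, v)
    m0, v0 : mean/variance of the (weakly informative) prior p(mu) = N(m0, v0)
    s2     : known variance of the Gaussian base distribution N(y; mu, s2)
    """
    # Expected log-likelihood E_{mu~q}[log N(y; mu, s2)], in closed form.
    exp_ll = -0.5 * np.log(2.0 * np.pi * s2) - ((y - m) ** 2 + v) / (2.0 * s2)
    # KL( N(m, v) || N(m0, v0) ), penalizing deviation from the prior.
    kl = 0.5 * (np.log(v0 / v) + (v + (m - m0) ** 2) / v0 - 1.0)
    return exp_ll - kl  # maximize this (or minimize its negative) during training

# Example: prior centered at the degraded input (m0 = 0.5) with large
# variance v0, so it is weakly informative; a posterior mean close to the
# ground truth y yields a higher ELBO than one far from it.
y = 0.6
good = elbo(y, m=0.6, v=0.01, m0=0.5, v0=10.0, s2=0.01)
bad = elbo(y, m=0.9, v=0.01, m0=0.5, v0=10.0, s2=0.01)
```

Here the large prior variance `v0` makes the KL term nearly flat, so the expected log-likelihood dominates; shrinking `v0` would pull the posterior toward the degraded input, which is the intuition behind using the input as a weakly informative prior for out-of-distribution fine-tuning.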
How to Cite
Chen, J., Wen, S., & Chan, S.-H. G. (2021). Joint Demosaicking and Denoising in the Wild: The Case of Training Under Ground Truth Uncertainty. Proceedings of the AAAI Conference on Artificial Intelligence, 35(2), 1018-1026. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16186
AAAI Technical Track on Computer Vision I