Towards Perceptual Image Dehazing by Physics-Based Disentanglement and Adversarial Training

Authors

  • Xitong Yang University of Maryland, College Park
  • Zheng Xu University of Maryland, College Park
  • Jiebo Luo University of Rochester

DOI:

https://doi.org/10.1609/aaai.v32i1.12317

Keywords:

Image dehazing, Generative Adversarial Network

Abstract

Single image dehazing is a challenging, under-constrained problem because both the scene radiance and the transmission are unknown. Previous methods address it with hand-designed priors or with supervised training on synthetic hazy image pairs. In practice, however, predefined priors are easily violated and paired image data is unavailable for supervised training. In this work, we propose the Disentangled Dehazing Network, an end-to-end model that generates realistic haze-free images using only unpaired supervision. Our approach removes the paired-training constraint by introducing a physical-model-based disentanglement and reconstruction mechanism, and employs multi-scale adversarial training to generate perceptually haze-free images. Experimental results on synthetic datasets demonstrate superior performance over state-of-the-art methods in terms of PSNR, SSIM, and CIEDE2000. Trained purely on natural haze-free and hazy images from our collected HazyCity dataset, our model generates more perceptually appealing dehazing results.
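The physical model referred to in the abstract is, in most dehazing work, the standard atmospheric scattering model, I = J·t + A·(1 − t), where I is the observed hazy image, J the haze-free scene radiance, t the transmission map, and A the global atmospheric light. A minimal NumPy sketch of this forward model and its inversion (not the paper's network, just the physics it disentangles; function names and the `t_min` clamp are illustrative choices) might look like:

```python
import numpy as np

def compose_haze(J, t, A):
    """Atmospheric scattering model: I = J * t + A * (1 - t).

    J: haze-free radiance, H x W x 3 array in [0, 1]
    t: transmission map, H x W array in [0, 1]
    A: global atmospheric light (scalar or length-3 array)
    """
    t = t[..., None]  # broadcast transmission over color channels
    return J * t + A * (1.0 - t)

def invert_haze(I, t, A, t_min=0.1):
    """Invert the model to recover J, clamping t to avoid blow-up
    where the transmission is near zero (dense haze)."""
    t = np.clip(t, t_min, 1.0)[..., None]
    return (I - A * (1.0 - t)) / t

# Round trip: composing haze and inverting it recovers the scene radiance
rng = np.random.default_rng(0)
J = rng.random((4, 4, 3))
t = 0.2 + 0.8 * rng.random((4, 4))  # keep t above the clamp threshold
A = 0.8
I = compose_haze(J, t, A)
J_rec = invert_haze(I, t, A)
```

The disentanglement idea in the paper is to estimate J, t, and A from the hazy input alone and train by reconstructing I through this forward model, so no paired ground-truth J is needed.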

Published

2018-04-27

How to Cite

Yang, X., Xu, Z., & Luo, J. (2018). Towards Perceptual Image Dehazing by Physics-Based Disentanglement and Adversarial Training. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12317