Unsupervised Multi-Exposure Image Fusion Breaking Exposure Limits via Contrastive Learning

Authors

  • Han Xu Wuhan University
  • Haochen Liang Wuhan University
  • Jiayi Ma Wuhan University

DOI:

https://doi.org/10.1609/aaai.v37i3.25404

Keywords:

CV: Computational Photography, Image & Video Synthesis, CV: Low Level & Physics-Based Vision, CV: Multi-modal Vision

Abstract

This paper proposes an unsupervised multi-exposure image fusion (MEF) method based on contrastive learning, termed MEF-CL. It breaks the exposure limits and performance bottleneck faced by existing methods. MEF-CL first designs similarity constraints to preserve the contents of the source images. This eliminates the need for ground truth (which does not actually exist and must be created artificially) and thus avoids the negative impact of inappropriate ground truth on performance and generalization. Moreover, we explore a latent feature space and apply contrastive learning in this space to guide the fused image to approximate normal-light samples and stay away from inappropriately exposed ones. In this way, characteristics of the fused images (e.g., illumination, colors) can be further improved without being limited by the source images. Therefore, MEF-CL is applicable to image pairs of any exposures, rather than only the pair of one under-exposed and one over-exposed image mandated by existing methods. By alleviating the dependence on source images, MEF-CL generalizes better to various scenes. Consequently, our results exhibit appropriate illumination, detailed textures, and saturated colors. Qualitative, quantitative, and ablation experiments validate the superiority and generalization of MEF-CL. Our code is publicly available at https://github.com/hanna-xu/MEF-CL.
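To make the contrastive objective in the abstract concrete, the sketch below shows one common way to pull fused-image features toward normal-light (positive) samples and push them away from inappropriately exposed (negative) ones in a latent space. This is a minimal illustration under assumed names: the encoder `encode`, the L1 feature distance, and the ratio-style loss are our assumptions, not the authors' implementation, which may differ in the official repository linked above.

```python
# Minimal sketch of a latent-space contrastive loss in the spirit of MEF-CL.
# The encoder, distance metric, and loss form are illustrative assumptions;
# see https://github.com/hanna-xu/MEF-CL for the authors' actual code.
import torch
import torch.nn.functional as F

def contrastive_loss(encode, fused, positives, negatives, eps=1e-6):
    """Encourage fused-image features to be close to normal-light samples
    and far from inappropriately exposed samples.

    encode:    feature extractor mapping images to latent features (hypothetical)
    fused:     fused image batch,             shape (B, C, H, W)
    positives: normal-light sample batch,     shape (B, C, H, W)
    negatives: list of badly exposed batches, each (B, C, H, W)
    """
    anchor = encode(fused)
    # Distance to positives: smaller is better.
    pos_dist = F.l1_loss(anchor, encode(positives))
    # Summed distance to all negatives: larger is better.
    neg_dist = sum(F.l1_loss(anchor, encode(n)) for n in negatives)
    # Ratio form: minimized when the anchor approaches the positives
    # while moving away from the negatives.
    return pos_dist / (neg_dist + eps)
```

Because the loss depends only on relative distances in the latent space, the fused image's illumination and color are steered by the normal-light samples rather than bounded by the exposures of the source images, which is what allows inputs of arbitrary exposure levels.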

Published

2023-06-26

How to Cite

Xu, H., Liang, H., & Ma, J. (2023). Unsupervised Multi-Exposure Image Fusion Breaking Exposure Limits via Contrastive Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 3010-3017. https://doi.org/10.1609/aaai.v37i3.25404

Section

AAAI Technical Track on Computer Vision III