EMEF: Ensemble Multi-Exposure Image Fusion

Authors

  • Renshuai Liu School of Informatics, Xiamen University
  • Chengyang Li School of Informatics, Xiamen University
  • Haitao Cao School of Informatics, Xiamen University
  • Yinglin Zheng School of Informatics, Xiamen University
  • Ming Zeng School of Informatics, Xiamen University
  • Xuan Cheng School of Informatics, Xiamen University

DOI:

https://doi.org/10.1609/aaai.v37i2.25259

Keywords:

CV: Computational Photography, Image & Video Synthesis

Abstract

Although remarkable progress has been made in recent years, current multi-exposure image fusion (MEF) research is still constrained by the lack of real ground truth, an objective evaluation function, and a robust fusion strategy. In this paper, we study the MEF problem from a new perspective: we do not utilize any synthesized ground truth, design any loss function, or develop any fusion strategy. Instead, our proposed method, EMEF, takes advantage of the wisdom of multiple imperfect MEF contributors, including both conventional and deep learning-based methods. Specifically, EMEF consists of two main stages: pre-training an imitator network and tuning the imitator at runtime. In the first stage, we make a unified network imitate different MEF targets via style modulation. In the second stage, we tune the imitator network by optimizing the style code, in order to find an optimal fusion result for each input pair. In our experiments, we construct EMEF from four state-of-the-art MEF methods and compare it with the individual methods and several other competitive methods on the latest released MEF benchmark dataset. The promising experimental results demonstrate that our ensemble framework can “get the best of all worlds”. The code is available at https://github.com/medalwill/EMEF.
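The two-stage idea in the abstract can be sketched in miniature: a style-code-conditioned "imitator" produces a fused image, and at runtime the style code is optimized per input stack. The sketch below is only illustrative, assuming hypothetical stand-ins everywhere: `imitator` is a convex-combination toy (the real imitator is a pre-trained CNN modulated by the style code), `quality` is a placeholder contrast score (the paper designs no loss function), and the random search is an assumed optimization strategy, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def imitator(images, style_code):
    # Hypothetical stand-in for EMEF's pre-trained imitator network:
    # fuse the exposure stack as a softmax-weighted combination.
    # In the paper, the style code instead modulates CNN features.
    w = np.exp(style_code - style_code.max())
    w = w / w.sum()
    return sum(wi * img for wi, img in zip(w, images))

def quality(fused):
    # Placeholder fusion-quality score (NOT from the paper):
    # reward contrast via the variance of the fused image.
    return fused.var()

def tune_style_code(images, n_dim, iters=200):
    # Stage 2 (sketch): search over style codes, keeping the one
    # whose fused output scores best for this particular stack.
    best_code, best_score = None, -np.inf
    for _ in range(iters):
        code = rng.standard_normal(n_dim)
        score = quality(imitator(images, code))
        if score > best_score:
            best_code, best_score = code, score
    return best_code, best_score

# Toy two-image exposure stack: under- and over-exposed versions
# of the same synthetic scene, with values in [0, 1].
base = rng.random((32, 32))
stack = [np.clip(base * 0.4, 0.0, 1.0), np.clip(base * 1.6, 0.0, 1.0)]

code, score = tune_style_code(stack, n_dim=len(stack))
fused = imitator(stack, code)
```

The key design point this illustrates is that the ensemble never merges pixels from the contributor methods directly; the contributors only shape the imitator during pre-training, and per-input adaptation happens entirely in the low-dimensional style-code space.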

Published

2023-06-26

How to Cite

Liu, R., Li, C., Cao, H., Zheng, Y., Zeng, M., & Cheng, X. (2023). EMEF: Ensemble Multi-Exposure Image Fusion. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 1710-1718. https://doi.org/10.1609/aaai.v37i2.25259

Section

AAAI Technical Track on Computer Vision II