Improving Dynamic HDR Imaging with Fusion Transformer

Authors

  • Rufeng Chen, Hangzhou Dianzi University
  • Bolun Zheng, Hangzhou Dianzi University
  • Hua Zhang, Hangzhou Dianzi University
  • Quan Chen, Hangzhou Dianzi University
  • Chenggang Yan, Hangzhou Dianzi University
  • Gregory Slabaugh, Queen Mary University of London
  • Shanxin Yuan, Queen Mary University of London

DOI:

https://doi.org/10.1609/aaai.v37i1.25107

Keywords:

CV: Computational Photography, Image & Video Synthesis

Abstract

Reconstructing a High Dynamic Range (HDR) image from several Low Dynamic Range (LDR) images with different exposures is a challenging task, especially in the presence of camera and object motion. Though existing models using convolutional neural networks (CNNs) have made great progress, challenges remain, e.g., ghosting artifacts. Transformers, originating from the field of natural language processing, have shown success in computer vision tasks, due to their ability to model a large receptive field even within a single layer. In this paper, we propose a transformer model for HDR imaging. Our pipeline consists of three steps: alignment, fusion, and reconstruction. The key component is the HDR transformer module. Through experiments and ablation studies, we demonstrate that our model outperforms the state-of-the-art by large margins on several popular public datasets.
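
The abstract describes a three-stage pipeline (alignment, fusion, reconstruction) built around a transformer fusion module. Below is a minimal PyTorch sketch of that structure, assuming three bracketed exposures as input. The class name, channel widths, the 6-channel input (LDR image stacked with its gamma-mapped version), the summation-based merge, and the single TransformerEncoderLayer used for fusion are illustrative assumptions, not the paper's actual architecture.

# Minimal sketch of the alignment -> fusion -> reconstruction pipeline.
# All module names, channel sizes, and layer choices are assumptions for
# illustration; the published model's architecture may differ.
import torch
import torch.nn as nn


class HDRFusionSketch(nn.Module):
    def __init__(self, in_ch: int = 6, feat_ch: int = 64, num_heads: int = 4):
        super().__init__()
        # Alignment stage (assumed): extract features from each LDR input
        # (image plus its gamma-corrected version stacked channel-wise).
        self.align = nn.Conv2d(in_ch, feat_ch, kernel_size=3, padding=1)
        # Fusion stage: a transformer encoder layer attends across spatial
        # positions, giving each output token a large receptive field even
        # within a single layer.
        self.fuse = nn.TransformerEncoderLayer(
            d_model=feat_ch, nhead=num_heads, batch_first=True
        )
        # Reconstruction stage: project fused features back to an HDR image.
        self.reconstruct = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, 3, 3, padding=1),
        )

    def forward(self, ldrs: list[torch.Tensor]) -> torch.Tensor:
        # ldrs: list of (B, in_ch, H, W) tensors, one per exposure.
        feats = [self.align(x) for x in ldrs]           # per-exposure features
        b, c, h, w = feats[0].shape
        # Merge exposures by summation (an assumption), then flatten the
        # spatial grid into tokens for self-attention.
        tokens = sum(feats).flatten(2).transpose(1, 2)  # (B, H*W, C)
        fused = self.fuse(tokens).transpose(1, 2).reshape(b, c, h, w)
        return self.reconstruct(fused)                  # (B, 3, H, W) HDR


if __name__ == "__main__":
    model = HDRFusionSketch()
    ldrs = [torch.randn(1, 6, 64, 64) for _ in range(3)]  # three exposures
    print(model(ldrs).shape)  # torch.Size([1, 3, 64, 64])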

Published

2023-06-26

How to Cite

Chen, R., Zheng, B., Zhang, H., Chen, Q., Yan, C., Slabaugh, G., & Yuan, S. (2023). Improving Dynamic HDR Imaging with Fusion Transformer. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 340-349. https://doi.org/10.1609/aaai.v37i1.25107

Section

AAAI Technical Track on Computer Vision I