FusionDN: A Unified Densely Connected Network for Image Fusion

Authors

  • Han Xu, Wuhan University
  • Jiayi Ma, Wuhan University
  • Zhuliang Le, Wuhan University
  • Junjun Jiang, Harbin Institute of Technology
  • Xiaojie Guo, Tianjin University

DOI:

https://doi.org/10.1609/aaai.v34i07.6936

Abstract

In this paper, we present a new unsupervised and unified densely connected network for different types of image fusion tasks, termed FusionDN. In our method, the densely connected network is trained to generate the fused image conditioned on the source images. Meanwhile, a weight block is applied to obtain two data-driven weights that serve as the retention degrees of features from the different source images, measuring both their quality and the amount of information they contain. Similarity losses based on these weights are applied for unsupervised learning. In addition, rather than training an individual model for every fusion task or jointly training all tasks in a coarse manner, we obtain a single model applicable to multiple fusion tasks by applying elastic weight consolidation, which avoids forgetting what has been learned from previous tasks when the tasks are trained sequentially. Qualitative and quantitative results demonstrate the advantages of FusionDN over state-of-the-art methods on different fusion tasks.
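
The abstract describes two key mechanisms: similarity losses weighted by data-driven retention degrees, and an elastic weight consolidation (EWC) penalty that preserves knowledge of previously trained fusion tasks. The following PyTorch sketch illustrates both under stated assumptions; the exact loss terms, the weight block producing w1 and w2, and all names (ssim_fn, fisher_diag, theta_star) are illustrative placeholders rather than the paper's released implementation.

    import torch
    import torch.nn.functional as F

    def weighted_similarity_loss(fused, src1, src2, w1, w2, ssim_fn):
        # Unsupervised fidelity loss: each source image contributes in
        # proportion to its data-driven retention weight (w1, w2 from the
        # weight block). ssim_fn is any SSIM implementation, e.g. from the
        # pytorch-msssim package (an assumption, not specified by the paper).
        ssim_term = w1 * (1 - ssim_fn(fused, src1)) + w2 * (1 - ssim_fn(fused, src2))
        mse_term = w1 * F.mse_loss(fused, src1) + w2 * F.mse_loss(fused, src2)
        return ssim_term + mse_term

    def ewc_penalty(model, fisher_diag, theta_star, lam=1.0):
        # Elastic weight consolidation: penalize deviation of the current
        # parameters from those learned on previous tasks (theta_star),
        # scaled by the diagonal Fisher information estimated on those tasks.
        penalty = 0.0
        for name, p in model.named_parameters():
            if name in fisher_diag:
                penalty = penalty + (fisher_diag[name] * (p - theta_star[name]) ** 2).sum()
        return lam * penalty

When training sequentially on a new fusion task, the total objective would combine the two terms, e.g. weighted_similarity_loss(...) + ewc_penalty(model, fisher_diag, theta_star), with fisher_diag and theta_star saved after finishing each earlier task.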

Published

2020-04-03

How to Cite

Xu, H., Ma, J., Le, Z., Jiang, J., & Guo, X. (2020). FusionDN: A Unified Densely Connected Network for Image Fusion. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 12484-12491. https://doi.org/10.1609/aaai.v34i07.6936

Issue

Vol. 34 No. 07 (2020)

Section

AAAI Technical Track: Vision