Multi-Focus Image Fusion via Explicit Defocus Blur Modelling

Authors

  • Yuhui Quan, South China University of Technology
  • Xi Wan, South China University of Technology
  • Zitao Tang, South China University of Technology
  • Jinxiu Liang, Peking University
  • Hui Ji, National University of Singapore

DOI:

https://doi.org/10.1609/aaai.v39i6.32714

Abstract

Multi-focus image fusion (MFIF) enhances depth of field in photography by generating an all-in-focus image from multiple images captured at different focal lengths. While deep learning has shown promise in MFIF, most existing methods overlook the physical properties of defocus blurring in their network design, limiting their interpretability and generalization. This paper introduces a novel framework that integrates explicit defocus blur modelling into the MFIF process, improving both interpretability and performance. Using an atom-based spatially-varying parameterized defocus blurring model, our approach calculates pixel-wise defocus descriptors and initial focused images from multi-focus source images in a scale-recurrent manner to estimate soft decision maps. Fusion is then performed using masks derived from these decision maps, with special treatment for pixels likely defocused in all source images or near boundaries of defocused/focused regions. The model is trained with a fusion loss and a cross-scale defocus estimation loss. Extensive experiments on benchmark datasets demonstrate the effectiveness of our approach.
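The core fusion step described above, combining source images through per-pixel soft decision maps, can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation: the function name and the simple normalized-weight scheme are assumptions, and the paper's special handling of all-defocused pixels and focus-boundary regions is omitted.

```python
import numpy as np

def fuse_with_decision_maps(sources, decision_maps, eps=1e-8):
    """Fuse multi-focus source images using soft per-pixel decision maps.

    sources:       list of K grayscale images, each of shape (H, W)
    decision_maps: list of K non-negative maps of shape (H, W), where a
                   larger value means the pixel is more likely in focus
                   in the corresponding source image.
    Returns the fused (H, W) image as a weighted per-pixel combination.
    """
    imgs = np.stack(sources).astype(np.float64)       # (K, H, W)
    weights = np.stack(decision_maps).astype(np.float64)
    # Normalize weights across the K sources so they sum to ~1 per pixel;
    # eps guards against pixels judged defocused in every source image.
    weights /= weights.sum(axis=0, keepdims=True) + eps
    return (weights * imgs).sum(axis=0)
```

With equal decision maps this reduces to a plain per-pixel average; in practice the maps would come from the defocus-descriptor network, sharpening toward a binary mask inside clearly focused regions.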

Published

2025-04-11

How to Cite

Quan, Y., Wan, X., Tang, Z., Liang, J., & Ji, H. (2025). Multi-Focus Image Fusion via Explicit Defocus Blur Modelling. Proceedings of the AAAI Conference on Artificial Intelligence, 39(6), 6657–6665. https://doi.org/10.1609/aaai.v39i6.32714

Section

AAAI Technical Track on Computer Vision V