MdaIF: Robust One-Stop Multi-Degradation-Aware Image Fusion with Language-Driven Semantics

Authors

  • Jing Li Key Laboratory of Geographic Information Science (Ministry of Education), East China Normal University, Shanghai 200241, China Key Laboratory of Spatial-temporal Big Data Analysis and Application of Natural Resources in Megacities, Ministry of Natural Resources, East China Normal University, Shanghai 200241, China
  • Yifan Wang School of Information, Central University of Finance and Economics, Beijing 102206, China
  • Jiafeng Yan School of Information, Central University of Finance and Economics, Beijing 102206, China
  • Renlong Zhang School of Information, Central University of Finance and Economics, Beijing 102206, China
  • Bin Yang School of Artificial Intelligence and Robotics, Hunan University, Changsha 410082, China

DOI:

https://doi.org/10.1609/aaai.v40i8.37548

Abstract

Infrared and visible image fusion aims to integrate complementary multi-modal information into a single fused result. However, existing methods 1) fail to account for the degradation of visible images under adverse weather conditions, thereby compromising fusion performance; and 2) rely on fixed network architectures, limiting their adaptability to diverse degradation scenarios. To address these issues, we propose a one-stop degradation-aware image fusion framework for multi-degradation scenarios driven by a large language model (MdaIF). Given the distinct scattering characteristics of different degradation scenarios (e.g., haze, rain, and snow) in atmospheric transmission, a mixture-of-experts (MoE) system is introduced to tackle image fusion across multiple degradation scenarios. To adaptively extract diverse weather-aware degradation knowledge and scene feature representations, collectively referred to as the semantic prior, we employ a pre-trained vision-language model (VLM) in our framework. Guided by the semantic prior, we propose a degradation-aware channel attention module (DCAM), which employs degradation prototype decomposition to facilitate multi-modal feature interaction in the channel domain. In addition, to achieve effective expert routing, the semantic prior and channel-domain modulated features are utilized to guide the MoE, enabling robust image fusion in complex degradation scenarios. Extensive experiments validate the effectiveness of our MdaIF, demonstrating superior performance over state-of-the-art (SOTA) methods.
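The routing idea described above — a semantic prior from the VLM, combined with channel-modulated features, gating a set of degradation experts — can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the function names (`moe_fuse`), the simple linear experts, and the concatenation-based gate are all hypothetical stand-ins for the actual MdaIF modules.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax for the gating scores.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_fuse(feat, prior, expert_weights, gate_w):
    """Illustrative expert routing guided by a semantic prior.

    feat           : (d,)   channel-modulated multi-modal feature
    prior          : (p,)   semantic prior vector (e.g., from a VLM)
    expert_weights : list of (d, d) matrices, one linear "expert"
                     per degradation type (haze, rain, snow, ...)
    gate_w         : (p + d, n_experts) gating projection
    """
    # Gate on both the semantic prior and the modulated feature,
    # mirroring the paper's use of both signals for expert routing.
    gate_in = np.concatenate([prior, feat])
    gates = softmax(gate_in @ gate_w)  # per-expert routing weights
    # Weighted combination of expert outputs.
    out = sum(g * (w @ feat) for g, w in zip(gates, expert_weights))
    return out, gates
```

In a real MoE the experts would be full sub-networks and the gate would typically be sparse (top-k), but the dense soft gating above captures how a degradation-dependent prior can steer which expert dominates the fused output.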

Published

2026-03-14

How to Cite

Li, J., Wang, Y., Yan, J., Zhang, R., & Yang, B. (2026). MdaIF: Robust One-Stop Multi-Degradation-Aware Image Fusion with Language-Driven Semantics. Proceedings of the AAAI Conference on Artificial Intelligence, 40(8), 6226–6234. https://doi.org/10.1609/aaai.v40i8.37548

Section

AAAI Technical Track on Computer Vision V