Diffusion Once and Done: Degradation-Aware LoRA for All-in-One Image Restoration

Authors

  • Ni Tang, School of Informatics, Xiamen University
  • Xiaotong Luo, The Hong Kong Polytechnic University
  • Zihan Cheng, School of Informatics, Xiamen University
  • Liangtai Zhou, School of Informatics, Xiamen University
  • Dongxiao Zhang, Jimei University
  • Yanyun Qu, School of Informatics, Xiamen University; Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China

DOI:

https://doi.org/10.1609/aaai.v40i11.37905

Abstract

Diffusion models have shown strong potential for all-in-one image restoration (AiOIR), owing to their ability to generate rich texture details. Existing AiOIR methods either retrain a diffusion model from scratch or fine-tune a pretrained one with extra conditional guidance. However, they often suffer from high inference costs and limited adaptability to diverse degradation types. In this paper, we propose an efficient AiOIR method, Diffusion Once and Done (DOD), which aims to achieve superior restoration performance with only one-step sampling of Stable Diffusion (SD) models. Specifically, multi-degradation feature modulation is first introduced to capture degradation-specific prompts with a pretrained diffusion model. Then, parameter-efficient conditional low-rank adaptation integrates these prompts, fine-tuning the SD model to adapt to different degradation types. In addition, a high-fidelity detail enhancement module is integrated into the SD decoder to improve structural and textural details. Experiments demonstrate that our method outperforms existing diffusion-based restoration approaches in both visual quality and inference efficiency.
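The core idea of degradation-conditioned low-rank adaptation can be illustrated with a minimal sketch: a frozen base linear layer is augmented with a trainable low-rank branch whose contribution is gated by a degradation-prompt embedding. This is a hypothetical toy example, not the paper's implementation; all class and parameter names (`ConditionalLoRALinear`, `prompt_dim`, `gate`) are invented for illustration.

```python
import torch
import torch.nn as nn

class ConditionalLoRALinear(nn.Module):
    """Toy sketch of degradation-conditioned LoRA (names are hypothetical).

    The frozen base weight is augmented with a low-rank update B @ A whose
    contribution is gated per sample by a degradation-prompt embedding.
    """
    def __init__(self, in_features, out_features, rank=4, prompt_dim=32):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # pretrained weights stay frozen
        self.base.bias.requires_grad_(False)
        self.lora_A = nn.Linear(in_features, rank, bias=False)
        self.lora_B = nn.Linear(rank, out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)      # LoRA branch starts as a no-op
        # Map the degradation prompt to a scalar gate for the LoRA branch.
        self.gate = nn.Linear(prompt_dim, 1)

    def forward(self, x, degradation_prompt):
        scale = torch.sigmoid(self.gate(degradation_prompt))  # (batch, 1)
        return self.base(x) + scale * self.lora_B(self.lora_A(x))

layer = ConditionalLoRALinear(64, 64, rank=4, prompt_dim=32)
x = torch.randn(2, 64)
prompt = torch.randn(2, 32)     # e.g. an embedding of "rain" vs. "haze"
out = layer(x, prompt)
print(out.shape)
```

Only the low-rank matrices and the gate are trainable, which is what makes such adaptation parameter-efficient: the pretrained SD weights are shared across all degradation types, and only the small conditional branch is updated.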

Published

2026-03-14

How to Cite

Tang, N., Luo, X., Cheng, Z., Zhou, L., Zhang, D., & Qu, Y. (2026). Diffusion Once and Done: Degradation-Aware LoRA for All-in-One Image Restoration. Proceedings of the AAAI Conference on Artificial Intelligence, 40(11), 9448–9456. https://doi.org/10.1609/aaai.v40i11.37905

Section

AAAI Technical Track on Computer Vision VIII