AGLLDiff: Guiding Diffusion Models Towards Unsupervised Training-free Real-world Low-light Image Enhancement

Authors

  • Yunlong Lin, Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, China; School of Informatics, Xiamen University, China
  • Tian Ye, The Hong Kong University of Science and Technology (Guangzhou), China
  • Sixiang Chen, The Hong Kong University of Science and Technology (Guangzhou), China
  • Zhenqi Fu, Tsinghua University, China
  • Yingying Wang, Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, China; School of Informatics, Xiamen University, China
  • Wenhao Chai, University of Washington, USA
  • Zhaohu Xing, The Hong Kong University of Science and Technology (Guangzhou), China
  • Wenxue Li, The Hong Kong University of Science and Technology (Guangzhou), China
  • Lei Zhu, The Hong Kong University of Science and Technology (Guangzhou), China; The Hong Kong University of Science and Technology, Hong Kong SAR, China
  • Xinghao Ding, Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, China; School of Informatics, Xiamen University, China

DOI:

https://doi.org/10.1609/aaai.v39i5.32564

Abstract

Existing low-light image enhancement (LIE) methods have achieved noteworthy success on synthetic distortions, yet they often fall short in practical applications. The limitations arise from two inherent challenges in real-world LIE: 1) distorted/clean image pairs are often impractical to collect and sometimes simply unavailable, and 2) accurately modeling complex degradations is non-trivial. To overcome these challenges, we propose the Attribute Guidance Diffusion framework (AGLLDiff), a training-free method for effective real-world LIE. Instead of explicitly defining the degradation process, AGLLDiff shifts the paradigm and models the desired attributes of normal-light images, such as exposure, structure, and color. These attributes are readily available, impose no assumptions about the degradation process, and guide the diffusion sampling process toward a reliable, high-quality solution space. Extensive experiments demonstrate that our approach outperforms the current leading unsupervised LIE methods across benchmarks in terms of both distortion-based and perceptual metrics, and that it performs well even under sophisticated in-the-wild degradations.
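To make the idea concrete, below is a minimal, self-contained PyTorch sketch of what attribute-guided sampling of this kind can look like: at each reverse diffusion step, the current estimate is nudged by the gradient of hand-crafted exposure, structure, and color losses computed against the low-light input. The denoiser stand-in, the specific loss definitions, and the guidance scale are illustrative assumptions, not the paper's actual formulation.

```python
# Sketch of attribute-guided diffusion sampling (classifier-guidance style).
# The denoiser, attribute losses, and hyper-parameters are placeholders,
# NOT the AGLLDiff models or formulation.
import torch
import torch.nn.functional as F

def exposure_loss(x, target_mean=0.5):
    # Hypothetical exposure attribute: pull mean brightness toward a target level.
    return (x.mean() - target_mean) ** 2

def structure_loss(x, y_low):
    # Hypothetical structure attribute: keep image gradients close to the low-light input.
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]
    return F.l1_loss(dx(x), dx(y_low)) + F.l1_loss(dy(x), dy(y_low))

def color_loss(x):
    # Hypothetical color attribute: encourage balanced RGB channel means (gray-world prior).
    means = x.mean(dim=(-2, -1))                      # (B, 3) per-channel means
    return ((means - means.mean(dim=-1, keepdim=True)) ** 2).mean()

@torch.no_grad()
def guided_sampling(denoiser, y_low, steps=50, guidance_scale=0.1):
    """One plausible guidance loop: each reverse step is followed by a
    gradient step on the attribute losses (assumed scheme)."""
    x = torch.randn_like(y_low)                       # start from Gaussian noise
    for t in reversed(range(steps)):
        x = denoiser(x, t)                            # unconditional reverse step (stand-in)
        with torch.enable_grad():
            x_g = x.detach().requires_grad_(True)
            loss = exposure_loss(x_g) + structure_loss(x_g, y_low) + color_loss(x_g)
            grad = torch.autograd.grad(loss, x_g)[0]
        x = x - guidance_scale * grad                 # steer toward the attribute prior
    return x.clamp(0, 1)

if __name__ == "__main__":
    dummy_denoiser = lambda x, t: x - 0.01 * x        # stand-in for a pretrained diffusion model
    y_low = torch.rand(1, 3, 64, 64) * 0.2            # fake low-light input
    out = guided_sampling(dummy_denoiser, y_low)
    print(out.shape, out.mean().item())
```

Because the guidance in such a scheme comes only from generic attribute losses and a pretrained denoiser, no paired data or explicit degradation model is needed at sampling time, which is the property the abstract emphasizes.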

Published

2025-04-11

How to Cite

Lin, Y., Ye, T., Chen, S., Fu, Z., Wang, Y., Chai, W., … Ding, X. (2025). AGLLDiff: Guiding Diffusion Models Towards Unsupervised Training-free Real-world Low-light Image Enhancement. Proceedings of the AAAI Conference on Artificial Intelligence, 39(5), 5307–5315. https://doi.org/10.1609/aaai.v39i5.32564

Issue

Section

AAAI Technical Track on Computer Vision IV