Learning Semantic Degradation-Aware Guidance for Recognition-Driven Unsupervised Low-Light Image Enhancement

Authors

  • Naishan Zheng, University of Science and Technology of China
  • Jie Huang, University of Science and Technology of China
  • Man Zhou, University of Science and Technology of China
  • Zizheng Yang, University of Science and Technology of China
  • Qi Zhu, University of Science and Technology of China
  • Feng Zhao, University of Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v37i3.25479

Keywords:

CV: Low Level & Physics-Based Vision

Abstract

Low-light images suffer from severe degradation, including low lightness and noise corruption, which harms both visual quality and visual recognition performance. To address this problem despite the unavailability of paired datasets in a wide range of scenarios, unsupervised low-light image enhancement (ULLIE) techniques have been developed. However, these methods are primarily guided to alleviate the degradation effect on visual quality rather than at the semantic level, which limits their performance on visual recognition tasks. To this end, we propose to learn a Semantic Degradation-Aware Guidance (SDAG) that perceives the low-light degradation effect at the semantic level in a self-supervised manner and is then used to guide ULLIE methods. The proposed SDAG uses low-light degradation factors as augmentation signals to degrade the low-light images, and then captures their degradation effect at the semantic level. Specifically, our SDAG employs the feature extractor of a subsequent pre-trained recognition model to extract semantic representations, and then learns to self-reconstruct the enhanced low-light image and its augmented degraded counterparts. By constraining the relative reconstruction quality of the original enhanced image against its augmented versions, our SDAG learns to be aware of the degradation effect at the semantic level through relative comparison. Moreover, our SDAG is general and can be plugged into the training paradigm of existing ULLIE methods. Extensive experiments demonstrate its effectiveness in improving ULLIE approaches on downstream recognition tasks while maintaining competitive visual quality. Code will be available at https://github.com/zheng980629/SDAG.
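The relative-comparison idea in the abstract can be illustrated with a toy sketch. Everything below is a hypothetical stand-in, not the authors' implementation: the degradation factors are simplified to gamma darkening plus Gaussian noise, the pre-trained recognition extractor is replaced by crude pooled image statistics, and "reconstruction effect" is approximated by a distance in that feature space. The point is only the ordering constraint: a more strongly degraded copy should incur a larger semantic-level reconstruction error than a mildly degraded one.

```python
import numpy as np

def degrade(img, gamma=2.0, noise_sigma=0.02, seed=0):
    """Apply simplified low-light degradation factors (assumed forms):
    gamma darkening followed by additive Gaussian noise."""
    rng = np.random.default_rng(seed)
    dark = np.power(np.clip(img, 0.0, 1.0), gamma)
    return np.clip(dark + rng.normal(0.0, noise_sigma, img.shape), 0.0, 1.0)

def semantic_features(img):
    """Toy stand-in for the frozen pre-trained recognition extractor:
    pooled intensity and edge statistics instead of CNN features."""
    gy, gx = np.gradient(img)
    return np.array([img.mean(), img.std(), np.abs(gx).mean(), np.abs(gy).mean()])

def semantic_degradation(enhanced, degraded):
    """Feature-space reconstruction error, used here as a proxy for the
    degradation effect at the semantic level."""
    f_e, f_d = semantic_features(enhanced), semantic_features(degraded)
    return float(np.mean((f_e - f_d) ** 2))

def relative_guidance_loss(enhanced, margin=1e-4):
    """Relative-comparison constraint: the strongly degraded copy should
    show a larger semantic error than the mildly degraded one; a hinge
    penalizes violations of that ordering."""
    mild = degrade(enhanced, gamma=1.5, noise_sigma=0.01)
    strong = degrade(enhanced, gamma=3.0, noise_sigma=0.05)
    e_mild = semantic_degradation(enhanced, mild)
    e_strong = semantic_degradation(enhanced, strong)
    return max(0.0, margin + e_mild - e_strong), e_mild, e_strong

# Example on a synthetic 8x8 gradient "enhanced" image.
enhanced = np.linspace(0.0, 1.0, 64).reshape(8, 8)
loss, e_mild, e_strong = relative_guidance_loss(enhanced)
```

In the paper's actual setting, such a guidance term would be added to the ULLIE training objective so the enhancer is pushed toward outputs whose semantic representations are least disturbed by degradation; the features here are placeholders for representations from the downstream recognition model.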

Published

2023-06-26

How to Cite

Zheng, N., Huang, J., Zhou, M., Yang, Z., Zhu, Q., & Zhao, F. (2023). Learning Semantic Degradation-Aware Guidance for Recognition-Driven Unsupervised Low-Light Image Enhancement. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 3678-3686. https://doi.org/10.1609/aaai.v37i3.25479

Section

AAAI Technical Track on Computer Vision III