Low-Light Image Enhancement Network Based on Multi-Scale Feature Complementation
DOI:
https://doi.org/10.1609/aaai.v37i3.25427
Keywords:
CV: Applications, ML: Applications, ML: Deep Neural Architectures, ML: Deep Neural Network Algorithms
Abstract
Images captured in low-light environments suffer from insufficient brightness and low contrast, which degrade subsequent image processing tasks. Although most current enhancement methods can produce high-contrast images, they still suffer from noise amplification and color distortion. To address these issues, this paper proposes a low-light image enhancement network based on multi-scale feature complementation (LIEN-MFC), a U-shaped encoder-decoder network supervised by images at multiple scales. In the encoder, four feature extraction branches are constructed to extract features of the low-light image at different scales. In the decoder, to preserve the integrity of the learned features at each scale, a feature supplementary fusion module (FSFM) is proposed to complement and integrate features from the different branches of the encoder and decoder. In addition, a feature restoration module (FRM) and an image reconstruction module (IRM) are built in each branch to restore the fused features and reconstruct the enhanced images. To better train the network, a joint loss function is defined, in which a discriminative loss term is designed to ensure that the enhanced results better match the visual characteristics of the human eye. Extensive experiments on benchmark datasets show that the proposed method outperforms several state-of-the-art methods both subjectively and objectively.
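For concreteness, the following is a minimal PyTorch sketch of the overall architecture described in the abstract: a four-scale U-shaped encoder-decoder whose decoder fuses encoder and decoder features at each scale and emits an enhanced image per scale for multi-scale supervision. The internal designs of FSFM, FRM, and IRM shown here (bilinear upsampling with concatenation and a 1x1 convolution, plain convolution blocks, and a 3x3 output convolution) are placeholder assumptions rather than the modules defined in the paper, and the joint loss with its discriminative term is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU; a stand-in for the paper's feature blocks.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))


class FSFM(nn.Module):
    # Feature supplementary fusion (assumed form): upsample the decoder feature
    # to the encoder feature's resolution, concatenate, and fuse with a 1x1 conv.
    def __init__(self, dec_ch, enc_ch):
        super().__init__()
        self.fuse = nn.Conv2d(dec_ch + enc_ch, enc_ch, 1)

    def forward(self, dec_feat, enc_feat):
        dec_feat = F.interpolate(dec_feat, size=enc_feat.shape[-2:],
                                 mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([dec_feat, enc_feat], dim=1))


class LIENMFC(nn.Module):
    def __init__(self, base_ch=32):
        super().__init__()
        chs = [base_ch * 2 ** i for i in range(4)]  # channel widths of the four scales
        self.encoders = nn.ModuleList(
            [conv_block(3 if i == 0 else chs[i - 1], chs[i]) for i in range(4)])
        self.fsfms = nn.ModuleList([FSFM(chs[i + 1], chs[i]) for i in range(3)])
        self.frms = nn.ModuleList([conv_block(c, c) for c in chs])               # feature restoration
        self.irms = nn.ModuleList([nn.Conv2d(c, 3, 3, padding=1) for c in chs])  # image reconstruction

    def forward(self, x):
        # Encoder: four branches over progressively downsampled resolutions.
        enc_feats, feat = [], x
        for i, enc in enumerate(self.encoders):
            if i > 0:
                feat = F.max_pool2d(feat, 2)
            feat = enc(feat)
            enc_feats.append(feat)

        # Decoder: coarse-to-fine, producing one enhanced image per scale.
        outputs = []
        dec_feat = self.frms[3](enc_feats[3])
        outputs.append(torch.sigmoid(self.irms[3](dec_feat)))
        for i in range(2, -1, -1):
            dec_feat = self.fsfms[i](dec_feat, enc_feats[i])
            dec_feat = self.frms[i](dec_feat)
            outputs.append(torch.sigmoid(self.irms[i](dec_feat)))
        return outputs[::-1]  # full-resolution result first


if __name__ == "__main__":
    low_light = torch.rand(1, 3, 256, 256)   # dummy low-light input
    for out in LIENMFC()(low_light):
        print(out.shape)                      # one supervision target per scale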
Published
2023-06-26
How to Cite
Yang, Y., Xu, W., Huang, S., & Wan, W. (2023). Low-Light Image Enhancement Network Based on Multi-Scale Feature Complementation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 3214-3221. https://doi.org/10.1609/aaai.v37i3.25427
Issue
Vol. 37 No. 3 (2023)
Section
AAAI Technical Track on Computer Vision III