DNIT: Enhancing Day-Night Image-to-Image Translation through Fine-Grained Feature Handling (Student Abstract)

Authors

  • Hanyue Liu — School of Information and Communication Engineering, Communication University of China, Beijing, China
  • Haonan Cheng — State Key Laboratory of Media Convergence and Communication, Communication University of China, Beijing, China
  • Long Ye — State Key Laboratory of Media Convergence and Communication, Communication University of China, Beijing, China; School of Data Science and Media Intelligence, Communication University of China, Beijing, China

DOI:

https://doi.org/10.1609/aaai.v38i21.30474

Keywords:

Image-to-image Translation, Nighttime Image Pre-processing, Edge Extraction

Abstract

Existing image-to-image translation methods perform less satisfactorily in the "day-night" domain due to insufficient study of scene features. To address this problem, we propose DNIT, which performs fine-grained handling of features with a nighttime image preprocessing (NIP) module and an edge fusion detection (EFD) module. The NIP module enhances brightness while minimizing noise, facilitating the extraction of content and style features. Meanwhile, the EFD module uses two types of edge images as additional constraints to optimize the generator. Experimental results show that DNIT generates more realistic, higher-quality images than other methods, demonstrating its effectiveness.
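The abstract does not specify how the EFD module's edge constraint is implemented. As an illustration of the general idea of penalizing edge-map disagreement between a generated image and a reference, here is a minimal NumPy sketch; the choice of Sobel edges and an L1 penalty are assumptions for illustration, not necessarily what DNIT uses:

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map of a 2-D grayscale image via Sobel filters."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w), dtype=float)
    gy = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)  # combined gradient magnitude

def edge_consistency_loss(generated, reference):
    """Mean absolute difference between two edge maps.

    A term of this form could be added to a generator's objective so that
    translated images preserve the edge structure of the source scene
    (hypothetical formulation, not taken from the paper).
    """
    return float(np.mean(np.abs(sobel_edges(generated) - sobel_edges(reference))))
```

In a GAN training loop, such a term would typically be weighted and summed with the adversarial and cycle-consistency losses; the weight would be a tunable hyperparameter.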

Published

2024-03-24

How to Cite

Liu, H., Cheng, H., & Ye, L. (2024). DNIT: Enhancing Day-Night Image-to-Image Translation through Fine-Grained Feature Handling (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23563-23564. https://doi.org/10.1609/aaai.v38i21.30474