Analogical Image Translation for Fog Generation
DOI:
https://doi.org/10.1609/aaai.v35i2.16233
Keywords:
Computational Photography, Image & Video Synthesis
Abstract
Image-to-image translation maps images from one given style to another. While exceptionally successful, current methods assume the availability of training images in both the source and target domains, which does not always hold in practice. Inspired by humans' capability for analogical reasoning, we propose analogical image translation (AIT), which exploits the concept of gist for the first time. Given images of two styles in the source domain, A and A', along with images B of the first style in the target domain, the goal is to learn a model that translates B to B' in the target domain, such that A:A' :: B:B'. AIT is especially useful for translation scenarios in which training data of one style is hard to obtain but training data of the same two styles in another domain is available. For instance, when translating from normal conditions to extreme, rare conditions, obtaining real training images of the latter is challenging, whereas obtaining synthetic data for both is relatively easy. In this work, we aim at adding adverse weather effects, more specifically fog, to images taken in clear weather. To circumvent the challenge of collecting real foggy images, AIT learns the gist of translating synthetic clear-weather images to foggy images, and then adds fog effects onto real clear-weather images, without ever seeing any real foggy image. AIT thus achieves zero-shot image translation, whose effectiveness and benefit are demonstrated on the downstream task of semantic foggy scene understanding.
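The setup described in the abstract can be pictured with a short sketch: the translation "gist" is learned in a supervised way on synthetic clear/foggy pairs (A, A'), while real clear images B only pass through the generator unpaired, and the trained generator is then applied zero-shot to obtain B'. This is a minimal illustrative sketch under stated assumptions, not the paper's actual architecture or losses; FogGenerator, train_step, and the optional disc critic are hypothetical placeholders.

```python
# Minimal sketch of the AIT idea from the abstract (hypothetical code,
# not the authors' implementation).
import torch
import torch.nn as nn

class FogGenerator(nn.Module):
    """Placeholder encoder-decoder; the paper's architecture may differ."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

G = FogGenerator()
opt = torch.optim.Adam(G.parameters(), lr=2e-4)
l1 = nn.L1Loss()

def train_step(syn_clear, syn_foggy, real_clear, disc=None):
    # Supervised "gist" of the translation, from synthetic pairs: A -> A'.
    loss = l1(G(syn_clear), syn_foggy)
    # Optional unpaired term on real clear images B, standing in for the
    # adversarial/consistency losses one would use in practice
    # (WGAN-style generator term shown here as a placeholder).
    if disc is not None:
        loss = loss - disc(G(real_clear)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Zero-shot application: foggy version B' = G(B) of a real clear image B,
# even though no real foggy image was ever seen during training.
```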
Published
2021-05-18
How to Cite
Gong, R., Dai, D., Chen, Y., Li, W., Paudel, D. P., & Van Gool, L. (2021). Analogical Image Translation for Fog Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(2), 1433-1441. https://doi.org/10.1609/aaai.v35i2.16233
Issue
Vol. 35 No. 2 (2021)
Section
AAAI Technical Track on Computer Vision I