Unsupervised Hierarchical Domain Adaptation for Adverse Weather Optical Flow
DOI: https://doi.org/10.1609/aaai.v37i3.25490
Keywords: CV: Low Level & Physics-Based Vision, CV: Motion & Tracking
Abstract
Optical flow estimation has made great progress, but it typically degrades under adverse weather. Although semi- and fully-supervised methods have made good attempts, the domain shift between synthetic and real adverse weather images deteriorates their performance. To alleviate this issue, our starting point is to transfer knowledge from the clean source domain to the degraded target domain in an unsupervised manner. Our key insight is that adverse weather does not change the intrinsic optical flow of the scene, but causes a significant difference in the warp error between clean and degraded images. In this work, we propose the first unsupervised framework for adverse weather optical flow via hierarchical motion-boundary adaptation. Specifically, we first employ image translation to construct the transformation relationship between the clean and degraded domains. In motion adaptation, we utilize flow consistency knowledge to align the cross-domain optical flows into a motion-invariant common space, where the optical flow from clean weather serves as guidance knowledge to obtain a preliminary optical flow for adverse weather. Furthermore, we leverage the warp error inconsistency, which measures the motion misalignment of boundaries between the clean and degraded domains, and propose a joint intra- and inter-scene boundary contrastive adaptation to refine the motion boundary. The hierarchical motion and boundary adaptations jointly promote optical flow estimation in a unified framework. Extensive quantitative and qualitative experiments verify the superiority of the proposed method.
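The warp error mentioned in the abstract is the photometric residual left after warping one frame toward the other with the estimated flow; where the flow is correct the residual vanishes, so the residual concentrates on misaligned motion boundaries. A minimal NumPy sketch of this concept (not the paper's implementation; it uses nearest-neighbor rather than bilinear warping, and the function names are illustrative):

```python
import numpy as np

def backward_warp(img, flow):
    """Warp `img` toward the reference frame using optical flow.

    img:  (H, W) grayscale image
    flow: (H, W, 2) flow from reference to `img`, (dx, dy) per pixel
    Nearest-neighbor sampling with border clamping, for simplicity.
    """
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    xs2 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    ys2 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return img[ys2, xs2]

def warp_error(ref, tgt, flow):
    """Per-pixel photometric warp error |ref - warp(tgt, flow)|."""
    return np.abs(ref - backward_warp(tgt, flow))
```

With a correct flow the interior warp error is zero; degradations such as rain or fog break this photometric constancy on the degraded image, which is what the boundary adaptation in the abstract exploits.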
Published
2023-06-26
How to Cite
Zhou, H., Chang, Y., Chen, G., & Yan, L. (2023). Unsupervised Hierarchical Domain Adaptation for Adverse Weather Optical Flow. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 3778-3786. https://doi.org/10.1609/aaai.v37i3.25490
Section
AAAI Technical Track on Computer Vision III