Underwater Ranker: Learn Which Is Better and How to Be Better
Keywords: CV: Low Level & Physics-Based Vision, CV: Computational Photography, Image & Video Synthesis
Abstract
In this paper, we present a ranking-based underwater image quality assessment (UIQA) method, abbreviated as URanker. URanker is built on an efficient conv-attentional image Transformer. For underwater images, we specially devise (1) a histogram prior that embeds the color distribution of an underwater image as a histogram token to attend to global degradation and (2) a dynamic cross-scale correspondence to model local degradation. The final prediction depends on the class tokens from different scales, which comprehensively considers multi-scale dependencies. Trained with a margin ranking loss, our URanker can accurately rank underwater images of the same scene enhanced by different underwater image enhancement (UIE) algorithms according to their visual quality. To this end, we also contribute a dataset, URankerSet, containing abundant results enhanced by different UIE algorithms and the corresponding perceptual rankings, to train our URanker. Beyond the good performance of URanker itself, we find that a simple U-shape UIE network achieves promising performance when coupled with our pre-trained URanker as additional supervision. In addition, we propose a normalization tail that significantly improves the performance of UIE networks. Extensive experiments demonstrate the state-of-the-art performance of our method, and its key designs are discussed. Our code and dataset are available at https://li-chongyi.github.io/URanker_files/.
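The abstract mentions training the ranker with a margin ranking loss over enhancement results of the same scene. As a minimal sketch of that idea (not the paper's implementation; the function names and the margin value 0.5 are illustrative assumptions), the standard pairwise form can be written as:

```python
def margin_ranking_loss(score_better: float, score_worse: float,
                        margin: float = 0.5) -> float:
    """Standard margin ranking loss for one ordered pair: penalizes the
    model when the higher-ranked image's predicted quality score does not
    exceed the lower-ranked image's score by at least `margin`."""
    return max(0.0, margin - (score_better - score_worse))


def pairwise_ranking_loss(scores_best_first, margin: float = 0.5) -> float:
    """Sum the margin ranking loss over all ordered pairs of predicted
    scores, given in ground-truth rank order (best first), as one might
    do for the ranked enhancement results of a single scene."""
    total = 0.0
    for i in range(len(scores_best_first)):
        for j in range(i + 1, len(scores_best_first)):
            total += margin_ranking_loss(scores_best_first[i],
                                         scores_best_first[j], margin)
    return total
```

When the predicted scores already respect the perceptual ranking by the chosen margin, the loss is zero; otherwise each violated pair contributes a linear penalty.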
How to Cite
Guo, C., Wu, R., Jin, X., Han, L., Zhang, W., Chai, Z., & Li, C. (2023). Underwater Ranker: Learn Which Is Better and How to Be Better. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 702-709. https://doi.org/10.1609/aaai.v37i1.25147
AAAI Technical Track on Computer Vision I