Content-Variant Reference Image Quality Assessment via Knowledge Distillation

Authors

  • Guanghao Yin, Zhejiang University
  • Wei Wang, ByteDance Inc.
  • Zehuan Yuan, ByteDance Inc.
  • Chuchu Han, Huazhong University of Science and Technology
  • Wei Ji, National University of Singapore
  • Shouqian Sun, Zhejiang University
  • Changhu Wang, ByteDance Inc.

DOI:

https://doi.org/10.1609/aaai.v36i3.20221

Keywords:

Computer Vision (CV)

Abstract

Humans are generally more skilled at perceiving differences between high-quality (HQ) and low-quality (LQ) images than at directly judging the quality of a single LQ image. The same applies to image quality assessment (IQA). Although recent no-reference (NR-IQA) methods have made great progress in predicting image quality without a reference image, they still leave room for improvement because HQ image information is not fully exploited. In contrast, full-reference (FR-IQA) methods tend to provide more reliable quality evaluations, but their practicality is limited by the requirement for pixel-aligned reference images. To address this, we first propose the content-variant reference method via knowledge distillation (CVRKD-IQA). Specifically, we use non-aligned reference (NAR) images to introduce various prior distributions of high-quality images; comparing the distribution differences between HQ and LQ images helps our model assess image quality more accurately. Further, knowledge distillation transfers more HQ-LQ distribution-difference information from the FR-teacher to the NAR-student and stabilizes CVRKD-IQA performance. Moreover, to fully mine combined local-global information while achieving faster inference, our model directly processes multiple image patches from the input with an MLP-mixer. Cross-dataset experiments verify that our model outperforms all NAR/NR-IQA state-of-the-art methods and even reaches performance comparable to FR-IQA methods in some cases. Since content-variant, non-aligned reference HQ images are easy to obtain, our model can support more IQA applications with its robustness to content variations. Our code is available at: https://github.com/guanghaoyin/CVRKD-IQA.
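The teacher-student transfer described in the abstract can be illustrated with a minimal sketch of a distillation objective: the NAR-student is trained on a quality-regression loss plus a feature-matching term that pulls its HQ-LQ difference features toward the FR-teacher's. The function name, the L2 feature-matching form, and the `alpha` weighting below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def distillation_loss(student_feat, teacher_feat, student_score, mos, alpha=0.5):
    """Hypothetical CVRKD-style objective (sketch, not the paper's loss).

    student_feat  : HQ-LQ difference features from the NAR-student
    teacher_feat  : corresponding features from the frozen FR-teacher
    student_score : student's predicted quality score
    mos           : ground-truth mean opinion score
    alpha         : assumed weight balancing the two terms
    """
    # Task term: regress the student's score toward the human MOS label.
    task_loss = (student_score - mos) ** 2
    # Distillation term: match the teacher's HQ-LQ difference features.
    distill_loss = np.mean((np.asarray(student_feat) - np.asarray(teacher_feat)) ** 2)
    return task_loss + alpha * distill_loss
```

In this sketch the teacher sees the pixel-aligned HQ reference during training only; at inference the student needs just the LQ input and any content-variant HQ image.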

Published

2022-06-28

How to Cite

Yin, G., Wang, W., Yuan, Z., Han, C., Ji, W., Sun, S., & Wang, C. (2022). Content-Variant Reference Image Quality Assessment via Knowledge Distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(3), 3134-3142. https://doi.org/10.1609/aaai.v36i3.20221

Section

AAAI Technical Track on Computer Vision III