Data-Efficient Image Quality Assessment with Attention-Panel Decoder


  • Guanyi Qin Tsinghua University
  • Runze Hu Beijing Institute of Technology
  • Yutao Liu Ocean University of China
  • Xiawu Zheng Peng Cheng Laboratory & Xiamen University
  • Haotian Liu Tsinghua University
  • Xiu Li Tsinghua University
  • Yan Zhang Xiamen University



CV: Representation Learning for Vision, CV: Applications, CV: Other Foundations of Computer Vision


Blind Image Quality Assessment (BIQA) is a fundamental task in computer vision that nevertheless remains unresolved due to complex distortion conditions and diverse image content. To confront this challenge, we propose a novel BIQA pipeline based on the Transformer architecture, which achieves an efficient quality-aware feature representation with much less data. More specifically, we view traditional fine-tuning in BIQA as an interpretation of the pre-trained model, and accordingly introduce a Transformer decoder to refine the perceptual information of the CLS token from different perspectives. This enables our model to establish the quality-aware feature manifold efficiently while attaining strong generalization capability. Meanwhile, inspired by the subjective evaluation behavior of humans, we introduce a novel attention-panel mechanism, which improves model performance and reduces prediction uncertainty simultaneously. The proposed BIQA method maintains a lightweight design with only one decoder layer, yet extensive experiments on eight standard BIQA datasets (both synthetic and authentic) demonstrate performance superior to state-of-the-art BIQA methods, e.g., achieving SRCC values of 0.875 (vs. 0.859 on LIVEC) and 0.980 (vs. 0.969 on LIVE). Checkpoints, logs and code will be available at
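The abstract's core idea, a single Transformer decoder layer whose learnable "panel member" queries each refine the encoder's perceptual features from a different perspective before their scores are averaged, can be illustrated with a minimal sketch. This is an assumption-laden reconstruction, not the authors' released code: the class and parameter names (`AttentionPanelDecoder`, `num_panel`, `score_head`) are hypothetical, and details such as panel size and score aggregation are illustrative choices.

```python
import torch
import torch.nn as nn

class AttentionPanelDecoder(nn.Module):
    """Hypothetical sketch of an attention-panel decoder: several learnable
    'panel member' queries attend, via one Transformer decoder layer, to the
    encoder's feature tokens; each member emits a quality score, and the
    panel verdict is their average, mimicking a panel of human raters."""

    def __init__(self, dim: int = 768, num_panel: int = 6, num_heads: int = 8):
        super().__init__()
        # One learnable query per panel member (panel size is an assumption).
        self.panel_queries = nn.Parameter(torch.randn(num_panel, dim) * 0.02)
        # Single decoder layer, matching the paper's light-weight claim.
        self.decoder_layer = nn.TransformerDecoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True)
        self.score_head = nn.Linear(dim, 1)  # per-member quality score

    def forward(self, enc_tokens: torch.Tensor) -> torch.Tensor:
        # enc_tokens: (B, N, dim) features from a pre-trained ViT encoder.
        b = enc_tokens.size(0)
        queries = self.panel_queries.unsqueeze(0).expand(b, -1, -1)
        refined = self.decoder_layer(queries, enc_tokens)   # (B, P, dim)
        scores = self.score_head(refined).squeeze(-1)       # (B, P)
        return scores.mean(dim=1)                           # (B,) panel average
```

Averaging several independently trained panel scores is one plausible way to realize the abstract's claim of simultaneously improving accuracy and reducing prediction uncertainty, since disagreement among members can also serve as an uncertainty estimate.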




How to Cite

Qin, G., Hu, R., Liu, Y., Zheng, X., Liu, H., Li, X., & Zhang, Y. (2023). Data-Efficient Image Quality Assessment with Attention-Panel Decoder. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 2091-2100.



AAAI Technical Track on Computer Vision II