Learning from History: Task-agnostic Model Contrastive Learning for Image Restoration

Authors

  • Gang Wu Harbin Institute of Technology
  • Junjun Jiang Harbin Institute of Technology
  • Kui Jiang Harbin Institute of Technology
  • Xianming Liu Harbin Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v38i6.28412

Keywords:

CV: Low Level & Physics-based Vision

Abstract

Contrastive learning has emerged as a prevailing paradigm for high-level vision tasks and, by introducing appropriate negative samples, it has also been exploited for low-level vision tasks to obtain a compact optimization space that accounts for their ill-posed nature. However, existing methods rely on manually predefined, task-oriented negatives, which often exhibit pronounced task-specific biases. To address this challenge, our paper introduces an innovative method termed 'learning from history', which dynamically generates negative samples from the target model itself. Our approach, named Model Contrastive Learning for Image Restoration (MCLIR), revives lagged (historical) versions of the target model as negative models, making it compatible with diverse image restoration tasks. To enable this, we propose the Self-Prior guided Negative loss (SPN). Existing models are significantly enhanced when retrained with the proposed model contrastive paradigm, yielding consistent improvements across various image restoration tasks and architectures. For example, models retrained with SPN outperform the original FFANet and DehazeFormer by 3.41 dB and 0.57 dB on the RESIDE indoor dataset for image dehazing. They also achieve notable gains of 0.47 dB over IDT on SPA-Data for image deraining and 0.12 dB over lightweight SwinIR on Manga109 for 4x super-resolution. Code and retrained models are available at https://github.com/Aitical/MCLIR.
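To make the idea concrete, below is a minimal, hypothetical sketch of how a self-prior guided negative term could be wired into a standard restoration training loop: a frozen, lagged copy of the target model produces the negative sample, and the loss pulls the prediction toward the ground truth while pushing it away from that negative output. The function names (`spn_loss`, `update_negative_model`), the L1-based contrastive ratio, and the EMA lag schedule are illustrative assumptions, not the authors' released implementation; see the linked repository for the actual code.

```python
import copy
import torch
import torch.nn.functional as F

def update_negative_model(model, negative_model, momentum=0.999):
    """EMA-style update so the negative model lags behind the target model
    (one plausible way to obtain a 'historical' negative; an assumption here)."""
    with torch.no_grad():
        for p, p_neg in zip(model.parameters(), negative_model.parameters()):
            p_neg.mul_(momentum).add_(p, alpha=1.0 - momentum)

def spn_loss(pred, target, negative, alpha=0.1, eps=1e-6):
    """Illustrative self-prior guided negative loss: attract the prediction to
    the ground truth (positive) and repel it from the lagged model's output
    (negative) via a contrastive-style ratio."""
    pos = F.l1_loss(pred, target)
    neg = F.l1_loss(pred, negative)
    return pos + alpha * pos / (neg + eps)

# Usage sketch, assuming `model` is any image-restoration network and
# `loader` yields (low-quality, ground-truth) pairs:
#
# negative_model = copy.deepcopy(model).eval()
# for lq, gt in loader:
#     with torch.no_grad():
#         neg_out = negative_model(lq)          # dynamically generated negative
#     loss = spn_loss(model(lq), gt, neg_out)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
#     update_negative_model(model, negative_model)
```

Because the negative samples come from the model's own history rather than hand-crafted degradations, the same recipe applies unchanged to dehazing, deraining, super-resolution, and other restoration tasks.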

Published

2024-03-24

How to Cite

Wu, G., Jiang, J., Jiang, K., & Liu, X. (2024). Learning from History: Task-agnostic Model Contrastive Learning for Image Restoration. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 5976-5984. https://doi.org/10.1609/aaai.v38i6.28412

Section

AAAI Technical Track on Computer Vision V