Dual-Domain Attention for Image Deblurring

Authors

  • Yuning Cui, Technical University of Munich
  • Yi Tao, MIT Universal Village Program
  • Wenqi Ren, Shenzhen Campus of Sun Yat-sen University
  • Alois Knoll, Technical University of Munich

DOI:

https://doi.org/10.1609/aaai.v37i1.25122

Keywords:

CV: Low Level & Physics-Based Vision, CV: Applications, CV: Language and Vision, CV: Learning & Optimization for CV, CV: Other Foundations of Computer Vision, CV: Representation Learning for Vision, ML: Applications, ML: Deep Neural Architectures, ML: Deep Neural Network Algorithms

Abstract

As a long-standing and challenging task, image deblurring aims to reconstruct the latent sharp image from its degraded counterpart. In this study, to bridge the gap between degraded/sharp image pairs in the spatial and frequency domains simultaneously, we develop a dual-domain attention mechanism for image deblurring. Self-attention is widely used in vision tasks; however, its quadratic complexity makes it impractical for image deblurring, which involves high-resolution images. To alleviate this issue, we propose a novel spatial attention module that implements self-attention in the style of dynamic group convolution, integrating information from local regions, enhancing representation learning, and reducing the computational burden. Regarding frequency-domain learning, many frequency-based deblurring approaches either treat the spectrum as a whole or decompose frequency components in a complicated manner. In this work, we devise a frequency attention module that compactly decouples the spectrum into distinct frequency parts and accentuates the informative part with extremely lightweight learnable parameters. Finally, we incorporate both attention modules into a U-shaped network. Extensive comparisons with prior art on common benchmarks show that our model, named Dual-Domain Attention Network (DDANet), obtains comparable results with significantly faster inference.
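To make the locality argument concrete, the PyTorch sketch below shows self-attention restricted to non-overlapping k × k windows. This is a minimal illustration, not the authors' implementation: the class name `LocalWindowAttention` and the window/head sizes are assumptions, and the paper's dynamic-group-convolution formulation of the module may differ in detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalWindowAttention(nn.Module):
    # Hypothetical sketch: self-attention computed inside non-overlapping
    # k x k windows only, so cost grows linearly in the number of pixels.
    # The paper's spatial module (dynamic group convolution style) is not
    # reproduced here.
    def __init__(self, dim, window=8, heads=4):
        super().__init__()
        self.window, self.heads = window, heads
        self.qkv = nn.Conv2d(dim, dim * 3, 1, bias=False)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        k = self.window
        # Pad so height and width are multiples of the window size.
        x = F.pad(x, (0, (k - w % k) % k, 0, (k - h % k) % k))
        hp, wp = x.shape[-2:]
        q, kk, v = self.qkv(x).chunk(3, dim=1)

        def to_windows(t):
            # (B, C, Hp, Wp) -> (B * nWindows * heads, k*k, C/heads)
            t = t.reshape(b, self.heads, c // self.heads, hp // k, k, wp // k, k)
            return t.permute(0, 3, 5, 1, 4, 6, 2).reshape(-1, k * k, c // self.heads)

        q, kk, v = to_windows(q), to_windows(kk), to_windows(v)
        attn = (q @ kk.transpose(-2, -1)) * (q.shape[-1] ** -0.5)
        out = attn.softmax(dim=-1) @ v          # attention inside each window only
        out = out.reshape(b, hp // k, wp // k, self.heads, k, k, c // self.heads)
        out = out.permute(0, 3, 6, 1, 4, 2, 5).reshape(b, c, hp, wp)
        return self.proj(out)[..., :h, :w]      # crop the padding away
```

Confining attention to k × k windows reduces the per-layer cost from O((HW)²) to O(HW · k²), which is the motivation the abstract gives for avoiding global self-attention on high-resolution inputs.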
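The abstract describes the frequency attention module only at a high level. The following sketch shows one plausible two-band variant under stated assumptions: the spectrum is split at a fixed cutoff `ratio` and each band is re-weighted by a single learnable gain, in the spirit of the "extremely lightweight learnable parameters" the abstract mentions. `FrequencyAttention`, the cutoff, and the two-gain design are illustrative, not the paper's exact decoupling scheme.

```python
import torch
import torch.nn as nn

class FrequencyAttention(nn.Module):
    # Hypothetical two-band sketch: split the FFT spectrum at a fixed
    # cutoff and scale each band with one learnable gain. The paper's
    # actual decoupling scheme may differ.
    def __init__(self, ratio=0.25):
        super().__init__()
        self.ratio = ratio                        # fraction of the band treated as "low"
        self.gains = nn.Parameter(torch.ones(2))  # [low-band gain, high-band gain]

    def forward(self, x):
        b, c, h, w = x.shape
        spec = torch.fft.rfft2(x, norm="ortho")   # complex, shape (B, C, H, W//2 + 1)
        # Binary low-frequency mask in the rfft2 layout.
        fy = torch.fft.fftfreq(h, device=x.device).abs().view(-1, 1)
        fx = torch.fft.rfftfreq(w, device=x.device).view(1, -1)
        low = (torch.maximum(fy, fx) <= 0.5 * self.ratio).to(x.dtype)
        # Re-weight the two bands, then transform back to the spatial domain.
        spec = spec * (self.gains[0] * low + self.gains[1] * (1.0 - low))
        return torch.fft.irfft2(spec, s=(h, w), norm="ortho")
```

With only two scalars to learn, the module adds negligible parameters and can be dropped into each stage of a U-shaped encoder-decoder alongside the spatial attention block.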

Published

2023-06-26

How to Cite

Cui, Y., Tao, Y., Ren, W., & Knoll, A. (2023). Dual-Domain Attention for Image Deblurring. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 479-487. https://doi.org/10.1609/aaai.v37i1.25122

Section

AAAI Technical Track on Computer Vision I