Spatial-Spectral Transformer for Hyperspectral Image Denoising

Authors

  • Miaoyu Li Beijing Institute of Technology
  • Ying Fu Beijing Institute of Technology
  • Yulun Zhang ETH Zurich

DOI:

https://doi.org/10.1609/aaai.v37i1.25221

Keywords:

CV: Low Level & Physics-Based Vision, CV: Computational Photography, Image & Video Synthesis

Abstract

Hyperspectral image (HSI) denoising is a crucial preprocessing procedure for subsequent HSI applications. Unfortunately, despite the progress of deep learning in HSI denoising, existing convolution-based methods face a trade-off between computational efficiency and the capability to model the non-local characteristics of HSI. In this paper, we propose a Spatial-Spectral Transformer (SST) to alleviate this problem. To fully explore intrinsic similarity characteristics in both the spatial and spectral dimensions, we conduct non-local spatial self-attention and global spectral self-attention with a Transformer architecture. The window-based spatial self-attention focuses on spatial similarity beyond the neighboring region, while the spectral self-attention exploits the long-range dependencies between highly correlated bands. Experimental results show that our proposed method outperforms state-of-the-art HSI denoising methods in both quantitative quality and visual results. The code is released at https://github.com/MyuLi/SST.
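To illustrate the idea of global spectral self-attention described above, the following is a minimal sketch (not the authors' released implementation, which is available in the linked repository): each spectral band is treated as a token so that attention is computed across all bands, capturing long-range inter-band dependencies. The class and parameter names here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpectralSelfAttention(nn.Module):
    """Toy global spectral self-attention: each spectral band is a token,
    so attention models dependencies across all bands of the HSI."""
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3, bias=False)  # joint query/key/value projection
        self.proj = nn.Linear(dim, dim)                  # output projection
        self.scale = dim ** -0.5                         # standard dot-product scaling

    def forward(self, x):
        # x: (batch, bands, dim) -- one feature vector per spectral band
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale    # (batch, bands, bands)
        attn = attn.softmax(dim=-1)
        return self.proj(attn @ v)

# usage: a 31-band HSI patch, each band embedded into a 64-dim feature vector
x = torch.randn(2, 31, 64)
out = SpectralSelfAttention(dim=64)(x)
print(out.shape)  # torch.Size([2, 31, 64])
```

The window-based spatial self-attention in the paper follows the same attention mechanism, but with tokens drawn from local spatial windows rather than from the spectral dimension.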

Published

2023-06-26

How to Cite

Li, M., Fu, Y., & Zhang, Y. (2023). Spatial-Spectral Transformer for Hyperspectral Image Denoising. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 1368-1376. https://doi.org/10.1609/aaai.v37i1.25221

Section

AAAI Technical Track on Computer Vision I