Learning Spatial Decay for Vision Transformers

Authors

  • Yuxin Mao School of Electronics and Information, Northwestern Polytechnical University, and Shaanxi Key Laboratory of Information Acquisition and Processing, Xi'an, China
  • Zhen Qin TapTap
  • Jinxing Zhou OpenNLPLab
  • Bin Fan School of Electronics and Information, Northwestern Polytechnical University, and Shaanxi Key Laboratory of Information Acquisition and Processing, Xi'an, China
  • Jing Zhang School of Electronics and Information, Northwestern Polytechnical University, and Shaanxi Key Laboratory of Information Acquisition and Processing, Xi'an, China
  • Yiran Zhong OpenNLPLab
  • Yuchao Dai School of Electronics and Information, Northwestern Polytechnical University, and Shaanxi Key Laboratory of Information Acquisition and Processing, Xi'an, China

DOI:

https://doi.org/10.1609/aaai.v40i10.37739

Abstract

Vision Transformers (ViTs) have revolutionized computer vision, yet their self-attention mechanism lacks explicit spatial inductive biases, leading to suboptimal performance on spatially structured tasks. Existing approaches introduce data-independent spatial decay based on fixed distance metrics, applying uniform attention weighting regardless of image content and limiting adaptability to diverse visual scenarios. Inspired by recent advances in large language models, where content-aware gating mechanisms (e.g., GLA, HGRN2, FOX) significantly outperform static alternatives, we present the first successful adaptation of data-dependent spatial decay to 2D vision transformers. We introduce the Spatial Decay Transformer (SDT), featuring a novel Context-Aware Gating (CAG) mechanism that generates dynamic, data-dependent decay for patch interactions. Our approach learns to modulate spatial attention based on both content relevance and spatial proximity. We address the fundamental challenge of 1D-to-2D adaptation through a unified spatial-content fusion framework that integrates Manhattan-distance-based spatial priors with learned content representations. Extensive experiments on ImageNet-1K classification and generation tasks demonstrate consistent improvements over strong baselines. Our work establishes data-dependent spatial decay as a new paradigm for enhancing spatial attention in vision transformers.
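The abstract gives no implementation details, so the following is only a minimal, hypothetical PyTorch sketch of how a content-aware gating layer could combine a Manhattan-distance spatial prior with a learned, per-patch decay rate. The module name ContextAwareGatedAttention, the softplus gate, and the additive log-space bias are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def manhattan_distance_matrix(h, w):
    """Pairwise Manhattan distances between all h*w patch positions, shape (N, N)."""
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()  # (N, 2)
    return (coords[:, None, :] - coords[None, :, :]).abs().sum(-1)


class ContextAwareGatedAttention(nn.Module):
    """Hypothetical single-head attention with data-dependent spatial decay.

    Query patch i attends to patch j with an extra bias of -g_i * d_ij in the
    attention logits, where d_ij is the Manhattan distance between patches and
    g_i > 0 is predicted from the content of patch i. This is equivalent to
    multiplying the unnormalized attention weights by exp(-g_i * d_ij).
    """

    def __init__(self, dim, h, w):
        super().__init__()
        self.scale = dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.gate = nn.Linear(dim, 1)  # predicts a per-patch decay rate from content
        self.register_buffer("dist", manhattan_distance_matrix(h, w))  # (N, N)

    def forward(self, x):  # x: (B, N, dim), N = h * w patch tokens
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = (q @ k.transpose(-2, -1)) * self.scale       # (B, N, N)
        g = F.softplus(self.gate(x))                          # (B, N, 1), positive rate
        logits = logits - g * self.dist                       # content-aware spatial decay
        return logits.softmax(dim=-1) @ v                     # (B, N, dim)


# Usage sketch: 14x14 patch grid of 384-dim tokens.
if __name__ == "__main__":
    layer = ContextAwareGatedAttention(dim=384, h=14, w=14)
    out = layer(torch.randn(2, 14 * 14, 384))
    print(out.shape)  # torch.Size([2, 196, 384])
```

Under these assumptions, a large gate value makes a patch attend mostly to its spatial neighbors, while a gate near zero recovers ordinary global attention; the actual SDT fusion of spatial and content terms may differ.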

Published

2026-03-14

How to Cite

Mao, Y., Qin, Z., Zhou, J., Fan, B., Zhang, J., Zhong, Y., & Dai, Y. (2026). Learning Spatial Decay for Vision Transformers. Proceedings of the AAAI Conference on Artificial Intelligence, 40(10), 7945-7953. https://doi.org/10.1609/aaai.v40i10.37739

Issue

Vol. 40 No. 10 (2026)

Section

AAAI Technical Track on Computer Vision VII