FFT-Based Dynamic Token Mixer for Vision

Authors

  • Yuki Tatsunami, Rikkyo University; AnyTech Co., Ltd.
  • Masato Taki, Rikkyo University

DOI:

https://doi.org/10.1609/aaai.v38i14.29457

Keywords:

ML: Deep Neural Architectures and Foundation Models, CV: Object Detection & Categorization, CV: Segmentation, ML: Deep Learning Algorithms

Abstract

Multi-head self-attention (MHSA)-equipped models have achieved notable performance in computer vision. However, their computational complexity is quadratic in the number of pixels in the input feature maps, resulting in slow processing, especially for high-resolution images. New types of token mixers have been proposed as alternatives to MHSA to circumvent this problem: an FFT-based token mixer performs global operations similar to MHSA but with lower computational complexity. Despite these attractive properties, the FFT-based token mixer has not been carefully examined for its compatibility with the rapidly evolving MetaFormer architecture. Here, we propose a novel token mixer called Dynamic Filter and novel image recognition models, DFFormer and CDFFormer, to close this gap. Results on image classification and downstream tasks, together with analysis and visualization, demonstrate the effectiveness of our models. Notably, their throughput and memory efficiency on high-resolution image recognition are remarkable. Our results indicate that Dynamic Filter is a token-mixer option that deserves serious consideration. The code is available at https://github.com/okojoalg/dfformer
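To make the complexity argument concrete, the core idea of an FFT-based token mixer can be sketched in a few lines: transform the feature map to the frequency domain, apply an elementwise filter (which mixes all spatial tokens globally), and transform back. The NumPy sketch below uses a static filter for simplicity; the paper's Dynamic Filter instead generates the filter from the input, and all names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fft_token_mixer(x, filt):
    """GFNet-style global token mixing via frequency-domain filtering (sketch).

    x:    (H, W, C) real-valued feature map
    filt: (H, W//2 + 1, C) complex filter; static here, whereas the paper's
          Dynamic Filter would generate it from x (illustrative assumption)
    """
    # FFT costs O(HW log HW), versus O((HW)^2) for pairwise attention
    X = np.fft.rfft2(x, axes=(0, 1))
    X = X * filt  # elementwise product in frequency = global mixing in space
    return np.fft.irfft2(X, s=x.shape[:2], axes=(0, 1))

H, W, C = 8, 8, 4
x = np.random.randn(H, W, C)
identity = np.ones((H, W // 2 + 1, C), dtype=complex)
y = fft_token_mixer(x, identity)  # an all-ones filter acts as the identity
```

Because the filter multiplies every frequency component, each output pixel depends on every input pixel, giving the MHSA-like global receptive field at FFT cost.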

Published

2024-03-24

How to Cite

Tatsunami, Y., & Taki, M. (2024). FFT-Based Dynamic Token Mixer for Vision. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14), 15328-15336. https://doi.org/10.1609/aaai.v38i14.29457

Section

AAAI Technical Track on Machine Learning V