Vision Transformer Off-the-Shelf: A Surprising Baseline for Few-Shot Class-Agnostic Counting

Authors

  • Zhicheng Wang, Huazhong Univ. of Sci. & Tech.
  • Liwen Xiao, Huazhong Univ. of Sci. & Tech.
  • Zhiguo Cao, Huazhong Univ. of Sci. & Tech.
  • Hao Lu, Huazhong Univ. of Sci. & Tech.

DOI:

https://doi.org/10.1609/aaai.v38i6.28396

Keywords:

CV: Applications, CV: Interpretability, Explainability, and Transparency

Abstract

Class-agnostic counting (CAC) aims to count objects of interest in a query image given a few exemplars. This task is typically addressed by extracting features from the query image and the exemplars separately and then matching their feature similarity, leading to an extract-then-match paradigm. In this work, we show that CAC can be simplified into an extract-and-match manner, particularly using a vision transformer (ViT) in which feature extraction and similarity matching are executed simultaneously within the self-attention. We reveal the rationale of such simplification from a decoupled view of the self-attention. The resulting model, termed CACViT, simplifies the CAC pipeline into a single pretrained plain ViT. Further, to compensate for the loss of scale and order-of-magnitude information caused by resizing and normalization in the plain ViT, we present two effective strategies for scale and magnitude embedding. Extensive experiments on the FSC147 and CARPK datasets show that CACViT significantly outperforms state-of-the-art CAC approaches in both effectiveness (23.60% error reduction) and generalization, suggesting that CACViT provides a concise and strong baseline for CAC. Code will be available.
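
As a rough illustration of the extract-and-match idea described in the abstract, the sketch below concatenates query-image patch tokens and exemplar patch tokens and runs them through a single self-attention block: attention among image tokens acts as feature extraction, while attention from image tokens to exemplar tokens acts as similarity matching. This is a minimal assumption-laden sketch, not the authors' CACViT code; the block name ExtractAndMatchBlock, the token shapes, and the use of torch.nn.MultiheadAttention are illustrative choices.

    import torch
    import torch.nn as nn

    class ExtractAndMatchBlock(nn.Module):
        """Toy self-attention block over concatenated image and exemplar tokens.

        Attention among image tokens performs feature extraction, while
        attention between image tokens and exemplar tokens performs similarity
        matching, so both happen in one pass (the extract-and-match idea).
        """
        def __init__(self, dim=384, num_heads=6):
            super().__init__()
            self.norm = nn.LayerNorm(dim)
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, image_tokens, exemplar_tokens):
            # image_tokens:    (B, N_img, C) patch tokens of the query image
            # exemplar_tokens: (B, N_ex,  C) patch tokens of the few exemplars
            tokens = torch.cat([image_tokens, exemplar_tokens], dim=1)
            x = self.norm(tokens)
            attended, attn_weights = self.attn(x, x, x, need_weights=True)
            tokens = tokens + attended  # residual connection
            # Rows of attn_weights for image tokens, restricted to exemplar
            # columns, can be read as a query-to-exemplar similarity map.
            n_img = image_tokens.shape[1]
            similarity = attn_weights[:, :n_img, n_img:]  # (B, N_img, N_ex)
            return tokens[:, :n_img], similarity

In a full counting pipeline, the refined image tokens and/or such a similarity map would typically be passed to a lightweight decoder that regresses a density map, whose sum gives the object count.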

Published

2024-03-24

How to Cite

Wang, Z., Xiao, L., Cao, Z., & Lu, H. (2024). Vision Transformer Off-the-Shelf: A Surprising Baseline for Few-Shot Class-Agnostic Counting. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 5832–5840. https://doi.org/10.1609/aaai.v38i6.28396

Issue

Vol. 38 No. 6 (2024)

Section

AAAI Technical Track on Computer Vision V