TransFG: A Transformer Architecture for Fine-Grained Recognition

Authors

  • Ju He Johns Hopkins University
  • Jie-Neng Chen Johns Hopkins University
  • Shuai Liu ByteDance Inc.
  • Adam Kortylewski Max Planck Institute for Informatics
  • Cheng Yang ByteDance Inc.
  • Yutong Bai Johns Hopkins University
  • Changhu Wang ByteDance Inc.

DOI:

https://doi.org/10.1609/aaai.v36i1.19967

Keywords:

Computer Vision (CV)

Abstract

Fine-grained visual classification (FGVC), which aims at recognizing objects from subcategories, is a very challenging task due to the inherently subtle inter-class differences. Most existing works tackle this problem by reusing the backbone network to extract features of detected discriminative regions. However, this strategy inevitably complicates the pipeline and pushes the proposed regions to contain most parts of the objects, and thus fails to locate the really important parts. Recently, the vision transformer (ViT) has shown strong performance on the traditional classification task. The self-attention mechanism of the transformer links every patch token to the classification token. In this work, we first evaluate the effectiveness of the ViT framework in the fine-grained recognition setting. Then, motivated by the observation that the strength of an attention link can be intuitively considered an indicator of token importance, we further propose a novel Part Selection Module that can be applied to most transformer architectures, in which we integrate all raw attention weights of the transformer into an attention map that guides the network to effectively and accurately select discriminative image patches and compute their relations. A contrastive loss is applied to enlarge the distance between feature representations of confusing classes. We name the augmented transformer-based model TransFG and demonstrate its value through experiments on five popular fine-grained benchmarks, where we achieve state-of-the-art performance. Qualitative results are presented for a better understanding of our model.
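The core idea of the Part Selection Module, integrating raw attention weights across layers into a single map and using the classification token's attention to pick discriminative patches, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; shapes, the head-averaged attention input, and the function name are illustrative assumptions.

```python
import numpy as np

def select_discriminative_patches(attn_layers, k=4):
    """Illustrative sketch: combine per-layer attention matrices by
    matrix product and return the k patch tokens that the [CLS]
    token attends to most strongly.

    attn_layers: list of (num_tokens, num_tokens) head-averaged,
    row-normalized attention matrices, one per transformer layer
    (assumed shapes; token 0 is taken to be [CLS]).
    """
    joint = attn_layers[0]
    for a in attn_layers[1:]:
        joint = a @ joint                 # propagate attention through layers
    cls_to_patches = joint[0, 1:]         # [CLS] row, patch columns only
    top = np.argsort(cls_to_patches)[::-1][:k]
    return top + 1                        # indices into the full token sequence

# Toy usage: 2 layers over 1 [CLS] + 6 patch tokens of random,
# row-normalized attention.
rng = np.random.default_rng(0)
layers = []
for _ in range(2):
    a = rng.random((7, 7))
    layers.append(a / a.sum(axis=1, keepdims=True))
idx = select_discriminative_patches(layers, k=3)
print(idx)  # three patch-token indices, each in [1, 6]
```

In TransFG the selected tokens, rather than the full patch sequence, are fed (together with the classification token) into the final transformer layer; the sketch above only covers the selection step.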

Published

2022-06-28

How to Cite

He, J., Chen, J.-N., Liu, S., Kortylewski, A., Yang, C., Bai, Y., & Wang, C. (2022). TransFG: A Transformer Architecture for Fine-Grained Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 852-860. https://doi.org/10.1609/aaai.v36i1.19967

Section

AAAI Technical Track on Computer Vision I