ImagePiece: Content-aware Re-tokenization for Efficient Image Recognition

Authors

  • Seungdong Yoa LG AI Research
  • Seungjun Lee LG AI Research
  • Hye-Seung Cho LG AI Research
  • Bumsoo Kim Chung-Ang University
  • Woohyung Lim LG AI Research

DOI:

https://doi.org/10.1609/aaai.v39i9.33034

Abstract

Vision Transformers (ViTs) have achieved remarkable success in various computer vision tasks. However, ViTs have a huge computational cost due to their inherent reliance on multi-head self-attention (MHSA), prompting efforts to accelerate ViTs for practical applications. To this end, recent works aim to reduce the number of tokens, mainly focusing on how to effectively prune or merge them. Nevertheless, since ViT tokens are generated from non-overlapping grid patches, they usually do not convey sufficient semantics, making them ill-suited to efficient ViTs. To address this, we propose ImagePiece, a novel re-tokenization strategy for Vision Transformers. Following the MaxMatch strategy of NLP tokenization, ImagePiece groups semantically insufficient yet locally coherent tokens until they convey meaning. This simple re-tokenization is highly compatible with previous token reduction methods, being able to drastically narrow down relevant tokens, enhancing the inference speed of DeiT-S by 54% (nearly 1.5x faster) while achieving a 0.39% improvement in ImageNet classification accuracy. For hyper-speed inference scenarios (with 251% acceleration), our approach surpasses other baselines by over 8% in accuracy.
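To illustrate the core idea of MaxMatch-style grouping described above, here is a minimal, hypothetical sketch (not the authors' implementation): adjacent patch-token embeddings are greedily merged into a group for as long as the next token stays cosine-similar to the group's running mean, mirroring how MaxMatch tokenizers extend a match as far as possible. The function name, similarity measure, and threshold are illustrative assumptions.

```python
import numpy as np

def retokenize(tokens, threshold=0.9):
    """Greedily merge runs of adjacent, locally coherent tokens.

    tokens: (N, D) array of patch-token embeddings (in raster order).
    Returns a list of merged token vectors (the mean of each group).
    MaxMatch-style: each group is extended as far as every next token
    remains cosine-similar to the group's running mean (assumed criterion).
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    merged, i, n = [], 0, len(tokens)
    while i < n:
        group_sum = tokens[i]
        count = 1
        j = i + 1
        # extend the match while the next token is coherent with the group mean
        while j < n and cos(group_sum / count, tokens[j]) >= threshold:
            group_sum = group_sum + tokens[j]
            count += 1
            j += 1
        merged.append(group_sum / count)
        i = j
    return merged
```

The merged groups then stand in for the original grid tokens, so a downstream pruning or merging method operates on fewer, more semantically meaningful units.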

Published

2025-04-11

How to Cite

Yoa, S., Lee, S., Cho, H.-S., Kim, B., & Lim, W. (2025). ImagePiece: Content-aware Re-tokenization for Efficient Image Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 39(9), 9544–9552. https://doi.org/10.1609/aaai.v39i9.33034

Section

AAAI Technical Track on Computer Vision VIII