EfficientFSL: Enhancing Few-Shot Classification via Query-Only Tuning In Vision Transformers
DOI:
https://doi.org/10.1609/aaai.v40i28.39519
Abstract
Large models such as Vision Transformers (ViTs) have demonstrated remarkable superiority over smaller architectures like ResNet in few-shot classification, owing to their powerful representational capacity. However, fine-tuning such large models demands extensive GPU memory and prolonged training time, making them impractical for many real-world low-resource scenarios. To bridge this gap, we propose EfficientFSL, a query-only fine-tuning framework tailored specifically for few-shot classification with ViTs, which achieves competitive performance while significantly reducing computational overhead. EfficientFSL fully exploits the knowledge embedded in the pre-trained model and its strong comprehension ability, achieving high classification accuracy with an extremely small number of tunable parameters. Specifically, we introduce a lightweight trainable Forward Block that synthesizes task-specific queries to extract informative features from the intermediate representations of the pre-trained model in a query-only manner. We further propose a Combine Block to fuse multi-layer outputs, enhancing the depth and robustness of feature representations. Finally, a Support-Query Attention Block mitigates distribution shift by adjusting prototypes to align with the query-set distribution. With minimal trainable parameters, EfficientFSL achieves state-of-the-art performance on four in-domain few-shot datasets and six cross-domain datasets, demonstrating its effectiveness in real-world applications.
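The three blocks named in the abstract can be illustrated with a toy sketch. This is NOT the paper's implementation: the block internals (mean fusion in the Combine Block, a residual attention update for the prototypes, and all dimensions and names) are illustrative assumptions; only the overall flow — learnable queries cross-attending over frozen intermediate ViT features, multi-layer fusion, and prototype adjustment toward the query set — follows the abstract's description.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v):
    # Scaled dot-product attention: q attends over k, pools v.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

rng = np.random.default_rng(0)
d, n_layers, n_tokens, n_query = 16, 3, 8, 4  # toy sizes (assumed)

# "Forward Block" (sketch): learnable task-specific queries —
# in the query-only setting these are the main trained parameters.
task_queries = rng.normal(size=(n_query, d))

# Frozen intermediate representations from several ViT layers.
layer_feats = [rng.normal(size=(n_tokens, d)) for _ in range(n_layers)]

# Query-only extraction: the same queries read out each layer.
extracted = [cross_attention(task_queries, f, f) for f in layer_feats]

# "Combine Block" (sketch): fuse multi-layer outputs; a simple
# mean stands in for whatever fusion the paper actually uses.
fused = np.mean(extracted, axis=0)   # (n_query, d)
embedding = fused.mean(axis=0)       # pooled image embedding, (d,)

# "Support-Query Attention Block" (sketch): nudge class prototypes
# toward the query-set distribution via a residual attention update.
protos = rng.normal(size=(5, d))       # 5-way support prototypes
query_set = rng.normal(size=(10, d))   # query-set embeddings
adjusted = protos + 0.1 * cross_attention(protos, query_set, query_set)

print(embedding.shape, adjusted.shape)
```

Classification would then score each query embedding against the adjusted prototypes (e.g. cosine similarity), so only the small query/fusion parameters need gradients while the ViT backbone stays frozen.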
Published
2026-03-14
How to Cite
Liao, W., Ruan, H., Yu, J., Song, B., Wang, Y., & Yang, X. (2026). EfficientFSL: Enhancing Few-Shot Classification via Query-Only Tuning In Vision Transformers. Proceedings of the AAAI Conference on Artificial Intelligence, 40(28), 23478–23486. https://doi.org/10.1609/aaai.v40i28.39519
Issue
Section
AAAI Technical Track on Machine Learning V