Strong Baselines for Parameter-Efficient Few-Shot Fine-Tuning
DOI:
https://doi.org/10.1609/aaai.v38i10.28978
Keywords:
ML: Transfer, Domain Adaptation, Multi-Task Learning, CV: Applications, CV: Large Vision Models, CV: Object Detection & Categorization
Abstract
Few-shot classification (FSC) entails learning novel classes given only a few examples per class after a pre-training (or meta-training) phase on a set of base classes. Recent works have shown that simply fine-tuning a pre-trained Vision Transformer (ViT) on new test classes is a strong approach to FSC. Fine-tuning ViTs, however, is expensive in time, compute, and storage. This has motivated the design of parameter-efficient fine-tuning (PEFT) methods, which fine-tune only a fraction of the Transformer's parameters. While these methods have shown promise, inconsistencies in experimental conditions make it difficult to disentangle their advantage from other experimental factors, including the feature extractor architecture, pre-trained initialization, and fine-tuning algorithm, amongst others. In our paper, we conduct a large-scale, experimentally consistent, empirical analysis to study PEFTs for few-shot image classification. Through a battery of over 1.8k controlled experiments on large-scale few-shot benchmarks, including Meta-Dataset and ORBIT, we uncover novel insights on PEFTs that shed light on their efficacy in fine-tuning ViTs for few-shot classification. Through our controlled empirical study, we have two main findings: (i) fine-tuning just the LayerNorm parameters (which we call LN-Tune) during few-shot adaptation is an extremely strong baseline across ViTs pre-trained with both self-supervised and supervised objectives; (ii) for self-supervised ViTs, simply learning a set of scaling parameters for each attention matrix (which we call Attn-Scale), together with a domain-residual adapter (DRA) module, leads to state-of-the-art performance on Meta-Dataset while being ~9x more parameter-efficient. Our empirical findings set strong baselines and call for rethinking the current design of PEFT methods for FSC.
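To make the headline finding concrete, below is a minimal PyTorch sketch of the LN-Tune idea as described in the abstract: freeze every parameter of a pre-trained ViT except those belonging to LayerNorm modules, then fine-tune as usual on the few-shot support set. The torchvision backbone, optimizer, and learning rate here are illustrative assumptions, not the paper's exact experimental setup.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Any pre-trained ViT works here; the paper studies both supervised and
# self-supervised initializations. This torchvision checkpoint is a placeholder.
model = vit_b_16(weights=ViT_B_16_Weights.DEFAULT)

# Freeze the entire backbone...
for param in model.parameters():
    param.requires_grad = False

# ...then unfreeze only the LayerNorm affine parameters (weight and bias).
for module in model.modules():
    if isinstance(module, nn.LayerNorm):
        for param in module.parameters():
            param.requires_grad = True

# Fine-tune only the trainable (LayerNorm) parameters on the support set.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)  # hypothetical hyperparameters

print(f"tuning {sum(p.numel() for p in trainable):,} of "
      f"{sum(p.numel() for p in model.parameters()):,} parameters")
```

The printout makes the parameter efficiency visible: the LayerNorm parameters amount to a tiny fraction of the full model, which is what makes per-task fine-tuning cheap in storage and compute.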
Published
2024-03-24
How to Cite
Basu, S., Hu, S., Massiceti, D., & Feizi, S. (2024). Strong Baselines for Parameter-Efficient Few-Shot Fine-Tuning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 11024-11031. https://doi.org/10.1609/aaai.v38i10.28978
Issue
Vol. 38 No. 10 (2024)
Section
AAAI Technical Track on Machine Learning I