On the Effectiveness of Parameter-Efficient Fine-Tuning

Authors

  • Zihao Fu, University of Cambridge
  • Haoran Yang, The Chinese University of Hong Kong
  • Anthony Man-Cho So, The Chinese University of Hong Kong
  • Wai Lam, The Chinese University of Hong Kong
  • Lidong Bing, DAMO Academy, Alibaba Group
  • Nigel Collier, University of Cambridge

DOI:

https://doi.org/10.1609/aaai.v37i11.26505

Keywords:

SNLP: Interpretability & Analysis of NLP Models, SNLP: Adversarial Attacks & Robustness, SNLP: Other Foundations of Speech & Natural Language Processing

Abstract

Fine-tuning pre-trained models has proven effective across a wide range of NLP tasks. However, fine-tuning the whole model is parameter-inefficient, as it yields an entirely new model for each task. Many recent works therefore propose to fine-tune only a small portion of the parameters while keeping most parameters shared across tasks. These methods achieve surprisingly good performance and are shown to be more stable than their fully fine-tuned counterparts. However, such methods are still not well understood. Several natural questions arise: How does parameter sparsity lead to promising performance? Why are these models more stable than fully fine-tuned models? How should the tunable parameters be chosen? In this paper, we first categorize the existing methods into random approaches, rule-based approaches, and projection-based approaches according to how they choose which parameters to tune. We then show that all of these methods are in fact sparse fine-tuned models and conduct a novel theoretical analysis of them. We show that the sparsity effectively imposes a regularization on the original model by controlling the upper bound of its stability, and that this stability leads to the better generalization capability empirically observed in many recent works. Although our theory grounds the effectiveness of sparsity, how to choose the tunable parameters remains an open problem: the random and rule-based methods do not use task-specific data, while the projection-based approaches suffer from the projection discontinuity problem. To better choose the tunable parameters, we propose a novel Second-order Approximation Method (SAM), which approximates the original problem with an analytically solvable optimization function and determines the tunable parameters by directly optimizing this approximation. We conduct extensive experiments on several tasks. The results show that our proposed SAM model outperforms many strong baselines and also verifies our theoretical analysis. The source code of this paper can be obtained from https://github.com/fuzihaofzh/AnalyzeParameterEfficientFinetune .
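For a concrete picture of the sparse fine-tuning setting analyzed in the abstract, the sketch below illustrates the general recipe: score each parameter on task data, keep only a small top-scoring subset tunable, and zero out the gradients of everything else during fine-tuning. This is a minimal, hypothetical PyTorch sketch, not the paper's SAM implementation; the squared-gradient importance score (a crude diagonal second-order proxy) and the helper names compute_importance, build_masks, and sparse_update are illustrative assumptions.

    # Hypothetical sketch of sparse fine-tuning with score-based parameter
    # selection (assumed names and scoring; not the paper's SAM code).
    import torch
    import torch.nn as nn

    def compute_importance(model: nn.Module, loss_fn, data_loader):
        """Accumulate squared gradients as a rough per-parameter importance score."""
        scores = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        for inputs, targets in data_loader:
            model.zero_grad()
            loss_fn(model(inputs), targets).backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    scores[n] += p.grad.detach() ** 2
        return scores

    def build_masks(scores, sparsity=0.005):
        """Mark only the top `sparsity` fraction of each tensor as tunable."""
        masks = {}
        for n, s in scores.items():
            k = max(1, int(sparsity * s.numel()))
            threshold = torch.topk(s.flatten(), k).values.min()
            masks[n] = (s >= threshold).float()
        return masks

    def sparse_update(model, masks, optimizer, loss):
        """One training step in which gradients of frozen entries are zeroed."""
        optimizer.zero_grad()
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                p.grad.mul_(masks[n])
        optimizer.step()

Under this scheme only the masked entries deviate from the pre-trained weights, so each task can be stored as the shared backbone plus a small sparse difference; this is the sparsity that the paper's analysis connects to stability and generalization.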

Published

2023-06-26

How to Cite

Fu, Z., Yang, H., So, A. M.-C., Lam, W., Bing, L., & Collier, N. (2023). On the Effectiveness of Parameter-Efficient Fine-Tuning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 12799-12807. https://doi.org/10.1609/aaai.v37i11.26505

Section

AAAI Technical Track on Speech & Natural Language Processing