GEM: A Scale-Aware and Distribution-Sensitive Sparse Fine-Tuning Framework for Effective Downstream Adaptation

Authors

  • Sungmin Kang, University of Southern California
  • Jisoo Kim, Inha University
  • Salman Avestimehr, University of Southern California
  • Sunwoo Lee, Inha University

DOI:

https://doi.org/10.1609/aaai.v40i27.39410

Abstract

Parameter-efficient fine-tuning (PEFT) has become a popular way to adapt large pre-trained models to new tasks. Most PEFT methods update only a small subset of parameters while freezing the rest, avoiding redundant computation. Because they maximize the absolute size of the updates without regard to the parameters’ original scale, however, the resulting changes in model behavior can be minimal. In contrast, we maximize updates relative to each parameter’s scale, yielding more meaningful downstream adaptation. We propose Gradient-to-Weight Ratio and Entropy-guided Masking (GEM), a parameter scale-aware, distribution-sensitive sparse fine-tuning framework. GEM prioritizes parameters whose updates are significant in proportion to their initial pre-trained values. It also adaptively determines how many parameters to tune at each layer based on the entropy of the parameter values, thereby making the most effective use of the computational budget in PEFT. Our empirical study demonstrates the efficacy of GEM on both general-domain tasks (GLUE and SuperGLUE) and domain-specific tasks (GSM8K and MBPP), achieving up to a 1.6% improvement in fine-tuning accuracy over full fine-tuning while updating only 0.1% of the model parameters.
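To make the two ideas named in the abstract concrete, the sketch below shows one plausible reading of GEM-style mask construction: scoring each parameter by its gradient-to-weight ratio (scale awareness) and allocating each layer's share of the sparse tuning budget in proportion to the entropy of that layer's parameter-value distribution (distribution sensitivity). The function name gem_masks, the histogram-based entropy estimate, and the proportional budget split are illustrative assumptions based only on the abstract, not the authors' published algorithm.

```python
import torch


def gem_masks(model, grads, sparsity=0.001, eps=1e-12, num_bins=64):
    """Sketch of GEM-style sparse mask construction (assumptions noted above).

    grads: dict mapping parameter names to gradient tensors from one
    backward pass on downstream data.
    """
    layers = [(name, p, grads[name]) for name, p in model.named_parameters()
              if name in grads]

    # Distribution sensitivity: entropy of each layer's parameter-value histogram.
    entropies = {}
    for name, p, _ in layers:
        hist = torch.histc(p.detach().float(), bins=num_bins)
        probs = hist / hist.sum()
        probs = probs[probs > 0]
        entropies[name] = -(probs * probs.log()).sum().item()

    total_entropy = sum(entropies.values())
    total_budget = int(sparsity * sum(p.numel() for _, p, _ in layers))

    masks = {}
    for name, p, g in layers:
        # Per-layer budget proportional to its entropy share of the total.
        k = max(1, int(total_budget * entropies[name] / total_entropy))
        k = min(k, p.numel())
        # Scale awareness: update magnitude relative to the pre-trained weight.
        score = g.detach().abs() / (p.detach().abs() + eps)
        threshold = torch.topk(score.flatten(), k).values.min()
        masks[name] = (score >= threshold).float()
    return masks
```

In this reading, the masks would be computed once from gradients on a small batch of downstream data and then applied at every optimizer step, e.g. p.grad.mul_(masks[name]) after backward, so that only the selected 0.1% of parameters ever change while the rest stay frozen at their pre-trained values.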

Published

2026-03-14

How to Cite

Kang, S., Kim, J., Avestimehr, S., & Lee, S. (2026). GEM: A Scale-Aware and Distribution-Sensitive Sparse Fine-Tuning Framework for Effective Downstream Adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(27), 22509-22517. https://doi.org/10.1609/aaai.v40i27.39410

Issue

Vol. 40 No. 27 (2026)

Section

AAAI Technical Track on Machine Learning IV