Less Is More: Rethinking Parameter-Efficient Fine-Tuning from a Subtractive Perspective

Authors

  • Tianqi Jiang, Tianjin University
  • Liu Yang, Tianjin University
  • Xi-Le Zhao, University of Electronic Science and Technology of China
  • Zixuan Qin, Tianjin University
  • Qinghua Hu, Tianjin University

DOI:

https://doi.org/10.1609/aaai.v40i7.37460

Abstract

Pretrained models are rapidly scaling in size, which substantially increases the cost of fine-tuning them for downstream tasks. To address this challenge, parameter-efficient fine-tuning (PEFT) methods have been developed to optimize a minimal set of parameters for adaptation. While current PEFT approaches predominantly employ an "additive" strategy, introducing learnable modules into inputs or architectures, they neglect the inherent knowledge embedded within pretrained models, which may be redundant or may even conflict with downstream tasks. This limitation leads to increased inference latency and suboptimal transfer performance, particularly in scenarios with significant domain gaps. In this paper, we propose a Subtractive Fine-tuning Paradigm (SFP), which converts multiple redundant operations within the original module into a linear transformation to improve inference speed and model performance. Specifically, we introduce a compact filter block to replace specific modules in the original structure that carry interference and redundancy, thereby reducing model conflicts. The filter block is constructed with a pseudo-inverse matrix, ensuring that it inherits the knowledge of the module it replaces; the rest of the model is then frozen, and only the filter block is fine-tuned to eliminate interfering and redundant knowledge, enhancing the model's adaptability to downstream tasks. Experimental results demonstrate that our SFP outperforms existing PEFT methods in accuracy while decreasing the overall model parameters by 12%. Compared to full fine-tuning, accuracy increases by 8.47% (74.04% vs. 65.57% on VTAB).
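The pseudo-inverse initialization described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the toy two-layer MLP, the probe data, and all dimensions are hypothetical stand-ins. The idea is to record the replaced module's input-output behavior on representative activations and fit the linear filter block by least squares via the Moore-Penrose pseudo-inverse, so the block approximately reproduces the module before fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the module being replaced: a small two-layer ReLU MLP.
# (Hypothetical; the paper's modules live inside a pretrained backbone.)
d_in, d_hidden, d_out, n = 16, 32, 16, 256
W1 = rng.standard_normal((d_in, d_hidden)) / np.sqrt(d_in)
W2 = rng.standard_normal((d_hidden, d_out)) / np.sqrt(d_hidden)

def original_module(x):
    return np.maximum(x @ W1, 0.0) @ W2

# Probe the module with representative activations X and record outputs Y.
X = rng.standard_normal((n, d_in))
Y = original_module(X)

# Least-squares fit W_f = pinv(X) @ Y: the linear filter block X @ W_f
# approximates Y, so the block "inherits" the replaced module's knowledge
# as its starting point; only W_f would then be fine-tuned downstream.
W_f = np.linalg.pinv(X) @ Y

rel_err = np.linalg.norm(X @ W_f - Y) / np.linalg.norm(Y)
```

The residual `rel_err` is nonzero because a single linear map cannot match a nonlinear module exactly; in the subtractive view, fine-tuning the filter block then discards whatever behavior of the replaced module is redundant or conflicting for the target task.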

Published

2026-03-14

How to Cite

Jiang, T., Yang, L., Zhao, X.-L., Qin, Z., & Hu, Q. (2026). Less Is More: Rethinking Parameter-Efficient Fine-Tuning from a Subtractive Perspective. Proceedings of the AAAI Conference on Artificial Intelligence, 40(7), 5432–5440. https://doi.org/10.1609/aaai.v40i7.37460

Section

AAAI Technical Track on Computer Vision IV