Put the Space of LoRA Initialization to the Extreme to Preserve Pre-trained Knowledge

Authors

  • Pengwei Tang, Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China; Beijing Key Laboratory of Research on Large Models and Intelligent Governance; Engineering Research Center of Next-Generation Intelligent Search and Recommendation, MOE
  • Xiaolin Hu, Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University
  • Yong Liu, Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China; Beijing Key Laboratory of Research on Large Models and Intelligent Governance; Engineering Research Center of Next-Generation Intelligent Search and Recommendation, MOE
  • Lizhong Ding, Beijing Institute of Technology
  • Dongjie Zhang, Xiaohongshu Inc.
  • Xing Wu, Xiaohongshu Inc.; Institute of Information Engineering, Chinese Academy of Sciences
  • Debing Zhang, Xiaohongshu Inc.

DOI:

https://doi.org/10.1609/aaai.v40i39.40608

Abstract

Low-Rank Adaptation (LoRA) is the leading parameter-efficient fine-tuning method for Large Language Models (LLMs), but it still suffers from catastrophic forgetting. Recent work has shown that specialized LoRA initialization can alleviate this forgetting. Two approaches to LoRA initialization currently aim to prevent knowledge forgetting during fine-tuning: (1) keeping the residual weights close to the pre-trained weights, and (2) making the space of the LoRA initialization orthogonal to the pre-trained knowledge. Current methods strive for the former, while the importance of the latter is not sufficiently recognized. We find that the space of the LoRA initialization, rather than the residual weights, is the key to preserving pre-trained knowledge. Existing methods such as MiLoRA aim to make the LoRA initialization space orthogonal to the pre-trained knowledge, but they approximate this knowledge with the null space of the pre-trained weights. In contrast to the pre-trained weights, which carry information only about the current layer, the input activations of the pre-trained knowledge reflect both the parameters of all preceding layers and the input data. Moreover, we find that the effective ranks of the input activations are much smaller than those of the pre-trained weights. Thus, the null space of the activations is more accurate and contains less pre-trained knowledge than that of the weights. Based on these observations, we propose LoRA-Null, which initializes LoRA in the null space of the activations. Extensive experiments show that LoRA-Null effectively preserves the pre-trained world knowledge of LLMs while achieving strong fine-tuning performance.
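The following is a minimal sketch of the core idea described above: collect input activations for a frozen linear layer, take the singular directions least excited by those activations as an approximate null space, and use them to build the LoRA down-projection. The function name, the SVD-based construction, and the zero-initialized up-projection are illustrative assumptions, not the authors' exact algorithm.

```python
import torch

def lora_null_space_init(activations: torch.Tensor, d_out: int, rank: int):
    """Hypothetical sketch: LoRA factors from the approximate null space of activations.

    activations: (num_tokens, d_in) inputs to a frozen linear layer, collected by
                 running pre-training-like data through the model.
    d_out:       output dimension of that linear layer.
    rank:        LoRA rank r.
    """
    d_in = activations.shape[1]
    # Right singular vectors of the activation matrix span the input space.
    # torch.linalg.svd sorts singular values in descending order, so the last
    # rows of Vh are the directions least excited by the pre-trained inputs,
    # i.e. an approximate null space of the activations.
    _, _, v_transposed = torch.linalg.svd(activations, full_matrices=True)
    lora_A = v_transposed[d_in - rank:].clone()  # (r, d_in), rows in the null space
    # Zero up-projection keeps the initial update B @ A at zero, so fine-tuning
    # starts exactly from the pre-trained model (an assumption of this sketch).
    lora_B = torch.zeros(d_out, rank)
    return lora_A, lora_B

# Toy usage: a layer with d_in=64, d_out=32, rank 8, and 1000 collected tokens.
acts = torch.randn(1000, 64)
A, B = lora_null_space_init(acts, d_out=32, rank=8)
# For inputs resembling the collected activations, A @ x is near zero, so the
# LoRA update B @ A @ x barely perturbs the pre-trained behavior.
```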

Published

2026-03-14

How to Cite

Tang, P., Hu, X., Liu, Y., Ding, L., Zhang, D., Wu, X., & Zhang, D. (2026). Put the Space of LoRA Initialization to the Extreme to Preserve Pre-trained Knowledge. Proceedings of the AAAI Conference on Artificial Intelligence, 40(39), 33232–33240. https://doi.org/10.1609/aaai.v40i39.40608

Section

AAAI Technical Track on Natural Language Processing IV