LoRA in LoRA: Towards Parameter-Efficient Architecture Expansion for Continual Visual Instruction Tuning

Authors

  • Chang Che, Hefei University of Technology
  • Ziqi Wang, Hefei University of Technology
  • Pengwan Yang, University of Amsterdam
  • Cheems Wang, Tsinghua University
  • Hui Ma, Hefei University of Technology
  • Zenglin Shi, Hefei University of Technology

DOI:

https://doi.org/10.1609/aaai.v40i24.39082

Abstract

Continual Visual Instruction Tuning (CVIT) enables Multimodal Large Language Models (MLLMs) to incrementally learn new tasks over time. However, this process is challenged by catastrophic forgetting, where performance on previously learned tasks deteriorates as the model adapts to new ones. A common approach to mitigate forgetting is architecture expansion, which introduces task-specific modules to prevent interference. Yet, existing methods often expand entire layers for each task, leading to significant parameter overhead and poor scalability. To overcome these issues, we introduce LoRA in LoRA (LiLoRA), a highly efficient architecture expansion method tailored for CVIT in MLLMs. LiLoRA shares the LoRA matrix A across tasks to reduce redundancy, applies an additional low-rank decomposition to matrix B to minimize task-specific parameters, and incorporates a cosine-regularized stability loss to preserve consistency in shared representations over time. Extensive experiments on a diverse CVIT benchmark show that LiLoRA consistently achieves superior performance in sequential task learning while significantly improving parameter efficiency compared to existing approaches.
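To make the parameterization described in the abstract concrete, below is a minimal, hedged sketch of how a LiLoRA-style adapter could be structured: the LoRA matrix A is shared across tasks, each task's B matrix is further factorized into two smaller matrices, and a cosine term penalizes drift of the shared A between tasks. The class and function names, the specific factorization B_t = U_t V_t, and the exact form of the stability loss are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LiLoRALinearSketch(nn.Module):
    """Conceptual sketch (not the paper's code): a frozen base linear layer
    plus a LoRA-style update whose A matrix is shared across tasks and whose
    task-specific B matrix is itself low-rank factorized."""

    def __init__(self, base: nn.Linear, rank: int = 8, inner_rank: int = 2):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # the pretrained MLLM weights stay frozen
        d_in, d_out = base.in_features, base.out_features
        # Shared across all tasks, reducing redundant per-task parameters.
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        # Per-task factors approximating B_t = U_t @ V_t (assumed factorization).
        self.U = nn.ParameterList()  # each entry: (d_out, inner_rank)
        self.V = nn.ParameterList()  # each entry: (inner_rank, rank)
        self.d_out, self.rank, self.inner_rank = d_out, rank, inner_rank

    def add_task(self) -> None:
        """Expand the architecture with a small task-specific module."""
        self.U.append(nn.Parameter(torch.zeros(self.d_out, self.inner_rank)))
        self.V.append(nn.Parameter(torch.randn(self.inner_rank, self.rank) * 0.01))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        B_t = self.U[task_id] @ self.V[task_id]   # (d_out, rank)
        delta = x @ self.A.t() @ B_t.t()          # low-rank task-specific update
        return self.base(x) + delta


def cosine_stability_loss(A_current: torch.Tensor, A_previous: torch.Tensor) -> torch.Tensor:
    """Assumed form of the cosine-regularized stability term: keep the shared
    A directionally close to its state after the previous task."""
    return 1.0 - F.cosine_similarity(A_current.flatten(), A_previous.flatten(), dim=0)
```

In this sketch, each new task adds only the small U_t and V_t factors rather than a full per-task B (let alone a full layer), which is the parameter-efficiency argument the abstract makes; the stability term would be added to the task loss with a weighting coefficient when training on each new task.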

Published

2026-03-14

How to Cite

Che, C., Wang, Z., Yang, P., Wang, C., Ma, H., & Shi, Z. (2026). LoRA in LoRA: Towards Parameter-Efficient Architecture Expansion for Continual Visual Instruction Tuning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(24), 19978-19986. https://doi.org/10.1609/aaai.v40i24.39082

Issue

Vol. 40 No. 24 (2026)

Section

AAAI Technical Track on Machine Learning I