HALoRA: Low-Rank Adaptation with Hierarchical Budget Allocation for Efficient Vision-Language Alignment

Authors

  • Letian Zhang, Tsinghua University
  • Guanghao Meng, Tsinghua University
  • Xudong Ren, Tsinghua University
  • Jinpeng Wang, Harbin Institute of Technology, Shenzhen

DOI:

https://doi.org/10.1609/aaai.v40i33.40056

Abstract

With the emergence of large multimodal models, dual-encoder alignment via contrastive learning has seen a resurgence. However, the escalating model size demands effective Parameter-Efficient Fine-Tuning (PEFT). While LoRA is a promising alternative to adapters that incurs no extra inference cost, we find that applying it naively to multimodal tasks causes a severe rank imbalance that favors the text modality and FFN layers. To address this, we propose HALoRA (Hierarchical Allocation LoRA), which introduces a component-wise budget allocator to ensure balanced fine-tuning across both modalities and their internal components, complemented by a gradient-approximated initialization that accelerates convergence. With only half as many trainable parameters as adapter-based tuning, HALoRA achieves superior or competitive performance on retrieval and zero-shot classification. Our work offers a more principled approach to multimodal LoRA and uncovers an intriguing asymmetry in vision-language alignment.
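The abstract does not include an implementation, but the core idea, splitting one global LoRA rank budget unevenly across modalities and components rather than using a uniform rank everywhere, can be sketched briefly. The snippet below is a minimal illustration, not the authors' code: the names LoRALinear and allocate_ranks, the component keys, and the budget weights are all hypothetical choices made for this sketch.

```python
# Minimal sketch (assumed, not from the paper): a LoRA layer plus a toy
# component-wise rank allocator. All identifiers here are illustrative.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update W + (alpha/r) * B A."""

    def __init__(self, base: nn.Linear, rank: int, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Standard LoRA init: A small random, B zero, so the update starts at 0.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)


def allocate_ranks(total_budget: int, weights: dict[str, float]) -> dict[str, int]:
    """Split one global rank budget across components in proportion to weights.

    A uniform split would reproduce the imbalance the abstract describes, so a
    hierarchical allocator would up-weight the under-served components (here,
    hypothetically, the vision tower and attention projections).
    """
    total = sum(weights.values())
    return {name: max(1, round(total_budget * w / total)) for name, w in weights.items()}


# Toy budget: favor vision and attention over text FFN layers (assumed weights).
ranks = allocate_ranks(
    total_budget=64,
    weights={"vision.attn": 2.0, "vision.ffn": 1.5, "text.attn": 1.0, "text.ffn": 0.5},
)
layer = LoRALinear(nn.Linear(768, 768), rank=ranks["vision.attn"])
print(ranks, layer(torch.randn(2, 768)).shape)
```

In this sketch the allocation is a fixed proportional split; the paper's allocator is presumably learned or data-driven, and its gradient-approximated initialization would replace the zero-init of B above.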

Published

2026-03-14

How to Cite

Zhang, L., Meng, G., Ren, X., & Wang, J. (2026). HALoRA: Low-Rank Adaptation with Hierarchical Budget Allocation for Efficient Vision-Language Alignment. Proceedings of the AAAI Conference on Artificial Intelligence, 40(33), 28283-28291. https://doi.org/10.1609/aaai.v40i33.40056

Issue

Vol. 40 No. 33 (2026)

Section

AAAI Technical Track on Machine Learning X