FRoD: Full-Rank Efficient Fine-Tuning with Rotational Degrees for Fast Convergence
DOI:
https://doi.org/10.1609/aaai.v40i31.39813
Abstract
Parameter-efficient fine-tuning (PEFT) methods have emerged as a practical solution for adapting large foundation models to downstream tasks, reducing computational and memory costs by updating only a small subset of parameters. Among them, approaches like LoRA aim to strike a balance between efficiency and expressiveness, but often suffer from slow convergence and limited adaptation capacity due to their inherent low-rank constraints. This trade-off hampers the ability of PEFT methods to capture complex patterns needed for diverse tasks. To address these challenges, we propose FRoD, a novel fine-tuning method that combines hierarchical joint decomposition with rotational degrees of freedom. By extracting a globally shared basis across layers and injecting sparse, learnable perturbations into scaling factors for flexible full-rank updates, FRoD enhances expressiveness and efficiency, leading to faster and more robust convergence. On 20 benchmarks spanning vision, reasoning, and language understanding, FRoD matches full model fine-tuning in accuracy, while using only 1.72% of trainable parameters under identical training budgets.
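To make the idea in the abstract concrete, the following is a minimal, hedged sketch of a full-rank adapter built from a basis shared across layers plus a sparse, learnable perturbation on per-layer scaling factors. It is not the authors' implementation; the class name `SharedBasisAdapter` and the parameters `basis`, `scales`, `sparse_mask`, and `perturb` are illustrative assumptions only.

```python
# Illustrative sketch (assumed parameterization, not the official FRoD code):
# a frozen layer weight is adapted by a full-rank update formed from a
# globally shared basis, per-layer learnable scaling factors, and a sparse
# learnable perturbation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedBasisAdapter(nn.Module):
    def __init__(self, weight: torch.Tensor, basis: torch.Tensor, sparsity: float = 0.9):
        super().__init__()
        d_out, d_in = weight.shape
        # Frozen pre-trained weight and globally shared basis (same shape here).
        self.register_buffer("weight", weight)
        self.register_buffer("basis", basis)
        # Per-layer learnable scaling factors applied to the basis rows.
        self.scales = nn.Parameter(torch.ones(d_out))
        # Sparse learnable perturbation; a real implementation would store only
        # the nonzero entries to stay parameter-efficient.
        self.register_buffer("sparse_mask", (torch.rand(d_out, d_in) > sparsity).float())
        self.perturb = nn.Parameter(torch.zeros(d_out, d_in))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Full-rank update: scaled shared basis plus masked sparse perturbation.
        delta = self.scales.unsqueeze(1) * self.basis + self.sparse_mask * self.perturb
        return F.linear(x, self.weight + delta)


# Usage: adapt a frozen 768x768 projection with a basis shared across layers.
w = torch.randn(768, 768)
shared_basis = 0.01 * torch.randn(768, 768)
layer = SharedBasisAdapter(w, shared_basis)
out = layer(torch.randn(4, 768))
```

Because the update adds a scaled basis and a sparse perturbation to the full weight rather than a low-rank product, it is not rank-constrained the way a LoRA update is, which is the property the abstract emphasizes.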
Published
2026-03-14
How to Cite
Wan, G., Chen, T., Feng, F., Zhou, H., & Xu, R. (2026). FRoD: Full-Rank Efficient Fine-Tuning with Rotational Degrees for Fast Convergence. Proceedings of the AAAI Conference on Artificial Intelligence, 40(31), 26107–26114. https://doi.org/10.1609/aaai.v40i31.39813
Section
AAAI Technical Track on Machine Learning VIII