Low-Rank Curvature for Zeroth-Order Optimization in LLM Fine-tuning
DOI:
https://doi.org/10.1609/aaai.v40i30.39715
Abstract
We introduce LOREN, a curvature-aware zeroth-order (ZO) optimization method for fine-tuning large language models (LLMs). Existing ZO methods, which estimate gradients via finite differences using random perturbations, often suffer from high variance and suboptimal search directions. Our approach addresses these challenges by: (i) reformulating the problem of gradient preconditioning as that of adaptively estimating an anisotropic perturbation distribution for gradient estimation, (ii) capturing curvature through a low-rank block diagonal preconditioner using the framework of natural evolution strategies, and (iii) applying a REINFORCE leave-one-out (RLOO) gradient estimator to reduce variance. Experiments on standard LLM benchmarks show that our method outperforms state-of-the-art ZO methods by achieving higher accuracy and faster convergence, while cutting peak memory usage by up to 27.3% compared with MeZO-Adam.
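As a rough illustration of the ingredients the abstract names, the sketch below combines a NES-style score-function ZO estimator over an anisotropic Gaussian perturbation distribution (here parameterized as diagonal-plus-low-rank, an assumption for illustration) with an RLOO baseline. All function names and the exact covariance parameterization are hypothetical and not taken from the paper.

```python
import numpy as np


def zo_rloo_gradient(loss_fn, theta, n_samples=4, sigma=1e-3,
                     d=None, U=None, rng=None):
    """Illustrative ZO gradient estimate (not the paper's algorithm).

    Perturbations are drawn as z = d * eps + U @ xi, i.e., from a
    Gaussian with diagonal-plus-low-rank covariance
    Sigma = diag(d**2) + U @ U.T. An RLOO baseline (the mean loss of
    the other samples) is subtracted to reduce variance; since each
    z_i is independent of its baseline, the estimator stays unbiased.
    """
    rng = rng or np.random.default_rng()
    dim = theta.shape[0]
    d = np.ones(dim) if d is None else d           # diagonal scales
    zs, losses = [], []
    for _ in range(n_samples):
        z = d * rng.standard_normal(dim)           # diagonal component
        if U is not None:                          # low-rank component, U: (dim, r)
            z = z + U @ rng.standard_normal(U.shape[1])
        zs.append(z)
        losses.append(loss_fn(theta + sigma * z))
    losses = np.asarray(losses)
    grad = np.zeros(dim)
    for i, z in enumerate(zs):
        # Leave-one-out baseline: mean loss of the other samples.
        baseline = (losses.sum() - losses[i]) / (n_samples - 1)
        grad += (losses[i] - baseline) * z
    # Using z itself (rather than Sigma^{-1} z) makes this an estimate
    # of Sigma @ grad(L): the perturbation covariance acts as a
    # preconditioner, which mirrors the abstract's reformulation of
    # preconditioning as shaping the perturbation distribution.
    return grad / (n_samples * sigma)


# Toy usage on a quadratic loss with isotropic perturbations.
loss = lambda w: float(w @ w)
g = zo_rloo_gradient(loss, np.ones(5), n_samples=8, sigma=1e-2)
```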
Published
2026-03-14
How to Cite
Seung, H., Lee, J., & Ko, H. (2026). Low-Rank Curvature for Zeroth-Order Optimization in LLM Fine-tuning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(30), 25235–25242. https://doi.org/10.1609/aaai.v40i30.39715
Issue
Vol. 40 No. 30 (2026)
Section
AAAI Technical Track on Machine Learning VII