FedALT: Federated Fine-Tuning Through Adaptive Local Training with Rest-of-World LoRA

Authors

  • Jieming Bian, University of Florida
  • Lei Wang, University of Florida
  • Letian Zhang, Middle Tennessee State University
  • Jie Xu, University of Florida

DOI:

https://doi.org/10.1609/aaai.v40i24.39054

Abstract

Fine-tuning large language models (LLMs) in federated settings enables privacy-preserving adaptation but suffers from cross-client interference due to model aggregation. Existing federated LoRA fine-tuning methods, primarily based on FedAvg, struggle with data heterogeneity, leading to harmful cross-client interference and suboptimal personalization. In this work, we propose FedALT, a novel personalized federated LoRA fine-tuning algorithm that fundamentally departs from FedAvg. Instead of using an aggregated model to initialize local training, each client continues training its individual LoRA while incorporating shared knowledge through a separate Rest-of-World (RoW) LoRA component. To effectively balance local adaptation and global information, FedALT introduces an adaptive mixer that dynamically learns input-specific weightings between the individual and RoW LoRA components, drawing conceptual foundations from the Mixture-of-Experts (MoE) paradigm. Through extensive experiments on NLP benchmarks, we demonstrate that FedALT significantly outperforms state-of-the-art personalized federated LoRA fine-tuning methods, achieving superior local adaptation without sacrificing computational efficiency.
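To make the abstract's mechanism concrete, below is a minimal, stdlib-only sketch of a FedALT-style layer. It is an illustration of the idea described above, not the authors' implementation: all shapes, the sigmoid gate, and names such as `AdaptiveLoRALayer`, `A_loc`, and `A_row` are hypothetical. The output combines a frozen base projection with the client's individual LoRA update and the shared Rest-of-World (RoW) LoRA update, weighted by an input-specific adaptive mixer.

```python
import math
import random

random.seed(0)


def matvec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]


def rand_matrix(rows, cols, scale=0.1):
    return [[random.uniform(-scale, scale) for _ in range(cols)] for _ in range(rows)]


class AdaptiveLoRALayer:
    """Conceptual FedALT-style layer (hypothetical shapes and names).

    The forward pass adds two low-rank corrections to the frozen base
    weight: the client's individual LoRA (trained locally) and the
    shared Rest-of-World LoRA (aggregated from other clients, frozen
    during local training), mixed by an input-dependent gate.
    """

    def __init__(self, d_in, d_out, rank):
        self.W0 = rand_matrix(d_out, d_in)      # frozen pretrained weight
        self.A_loc = rand_matrix(rank, d_in)    # individual LoRA factors
        self.B_loc = rand_matrix(d_out, rank)
        self.A_row = rand_matrix(rank, d_in)    # RoW LoRA factors
        self.B_row = rand_matrix(d_out, rank)
        self.gate_w = [random.uniform(-0.1, 0.1) for _ in range(d_in)]

    def mixer(self, x):
        # Input-specific weight in (0, 1): sigmoid of a linear score.
        score = sum(w_i * x_i for w_i, x_i in zip(self.gate_w, x))
        return 1.0 / (1.0 + math.exp(-score))

    def forward(self, x):
        base = matvec(self.W0, x)
        loc = matvec(self.B_loc, matvec(self.A_loc, x))   # individual expert
        row = matvec(self.B_row, matvec(self.A_row, x))   # RoW expert
        g = self.mixer(x)
        return [b + g * l + (1.0 - g) * r for b, l, r in zip(base, loc, row)]


layer = AdaptiveLoRALayer(d_in=4, d_out=3, rank=2)
y = layer.forward([1.0, 0.5, -0.2, 0.3])
print(len(y))
```

The gate plays the role of the MoE-style router mentioned in the abstract: per input, it decides how much of the shared RoW knowledge versus the client-specific LoRA to use, rather than averaging the two models into one as FedAvg-based methods do.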

Published

2026-03-14

How to Cite

Bian, J., Wang, L., Zhang, L., & Xu, J. (2026). FedALT: Federated Fine-Tuning Through Adaptive Local Training with Rest-of-World LoRA. Proceedings of the AAAI Conference on Artificial Intelligence, 40(24), 19728-19736. https://doi.org/10.1609/aaai.v40i24.39054

Section

AAAI Technical Track on Machine Learning I