We have released a survey paper, "A Survey on Federated Fine-tuning of Large Language Models". Feel free to cite it or to open pull requests.
# Awesome-Federated-LLM-Learning
- Towards building the federated GPT: Federated instruction tuning. [Paper]
- Communication-Efficient and Tensorized Federated Fine-Tuning of Large Language Models. [Paper]
- Selective Aggregation for Low-Rank Adaptation in Federated Learning. [Paper]
- FeDeRA: Efficient fine-tuning of language models in federated learning leveraging weight decomposition. [Paper]
- LoRA-FAIR: Federated LoRA Fine-Tuning with Aggregation and Initialization Refinement. [Paper]
- Federated LoRA with Sparse Communication. [Paper]
- SA-FedLora: Adaptive Parameter Allocation for Efficient Federated Learning with LoRA Tuning. [Paper]
- SLoRA: Federated parameter efficient fine-tuning of language models. [Paper]
- FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large Language Models in Federated Learning. [Paper]
- Robust Federated Finetuning of Foundation Models via Alternating Minimization of LoRA. [Paper]
- Automated federated pipeline for parameter-efficient fine-tuning of large language models. [Paper]
- Low-Parameter Federated Learning with Large Language Models. [Paper]
- Towards Robust and Efficient Federated Low-Rank Adaptation with Heterogeneous Clients. [Paper]
- FedRA: A Random Allocation Strategy for Federated Tuning to Unleash the Power of Heterogeneous Clients. [Paper]
- Fed-piLot: Optimizing LoRA Assignment for Efficient Federated Foundation Model Fine-Tuning. [Paper]
- Heterogeneous LoRA for federated fine-tuning of on-device foundation models. [Paper]
- FLoRA: Federated fine-tuning large language models with heterogeneous low-rank adaptations. [Paper]
- Federated fine-tuning of large language models under heterogeneous tasks and client resources. [Paper]
- Federated LLMs Fine-tuned with Adaptive Importance-Aware LoRA. [Paper]
- Towards Federated Low-Rank Adaptation of Language Models with Rank Heterogeneity. [Paper]
- FedHM: Efficient federated learning for heterogeneous models via low-rank factorization. [Paper]
- RBLA: Rank-Based-LoRA-Aggregation for Fine-Tuning Heterogeneous Models. [Paper]
- FDLoRA: Personalized Federated Learning of Large Language Model via Dual LoRA Tuning. [Paper]
- FedLoRA: Model-heterogeneous personalized federated learning with LoRA tuning. [Paper]
- FedLoRA: When Personalized Federated Learning Meets Low-Rank Adaptation. [Paper]
- Dual-Personalizing Adapter for Federated Foundation Models. [Paper]
- Personalized Federated Instruction Tuning via Neural Architecture Search. [Paper]
- Communication-Efficient Personalized Federated Learning for Speech-to-Text Tasks. [Paper]
- Personalized Federated Fine-Tuning for LLMs via Data-Driven Heterogeneous Model Architectures. [Paper]
- Prompt federated learning for weather forecasting: Toward foundation models on meteorological data. [Paper]
- PromptFL: Let federated participants cooperatively learn prompts instead of models - federated learning in age of foundation model. [Paper]
- FedBPT: Efficient federated black-box prompt tuning for large language models. [Paper]
- Federated learning of large language models with parameter-efficient prompt tuning and adaptive optimization. [Paper]
- Efficient federated prompt tuning for black-box large pre-trained models. [Paper]
- Text-driven prompt generation for vision-language models in federated learning. [Paper]
- Learning federated visual prompt in null space for MRI reconstruction. [Paper]
- Fed-CPrompt: Contrastive prompt for rehearsal-free federated continual learning. [Paper]
- FedPrompt: Communication-efficient and privacy-preserving prompt tuning in federated learning. [Paper]
- Tunable soft prompts are messengers in federated learning. [Paper]
- HePCo: Data-free heterogeneous prompt consolidation for continual federated learning. [Paper]
- Prompt-enhanced Federated Learning for Aspect-Based Sentiment Analysis. [Paper]
- Towards practical few-shot federated NLP. [Paper]
- Federated prompting and chain-of-thought reasoning for improving LLMs answering. [Paper]
- FedHPL: Efficient Heterogeneous Federated Learning with Prompt Tuning and Logit Distillation. [Paper]
- Probabilistic Federated Prompt-Tuning with Non-IID and Imbalanced Data. [Paper]
- Federated Class-Incremental Learning with Prompting. [Paper]
- Explore and Cure: Unveiling Sample Effectiveness with Context-Aware Federated Prompt Tuning. [Paper]
- Federated Prompt Learning for Weather Foundation Models on Devices. [Paper]
- Efficient model personalization in federated learning via client-specific prompt generation. [Paper]
- Unlocking the potential of prompt-tuning in bridging generalized and personalized federated learning. [Paper]
- pFedPrompt: Learning personalized prompt for vision-language models in federated learning. [Paper]
- Global and local prompts cooperation via optimal transport for federated learning. [Paper]
- Visual prompt based personalized federated learning. [Paper]
- Personalized federated continual learning via multi-granularity prompt. [Paper]
- FedLPPA: Learning Personalized Prompt and Aggregation for Federated Weakly-supervised Medical Image Segmentation. [Paper]
- Harmonizing Generalization and Personalization in Federated Prompt Learning. [Paper]
- Tackling Feature-Classifier Mismatch in Federated Learning via Prompt-Driven Feature Transformation. [Paper]
- Personalized Federated Learning for Text Classification with Gradient-Free Prompt Tuning. [Paper]
- Mixture of Experts Made Personalized: Federated Prompt Learning for Vision-Language Models. [Paper]
- CP²GFed: Cross-granular and Personalized Prompt-based Green Federated Tuning for Giant Models. [Paper]
- DiPrompT: Disentangled Prompt Tuning for Multiple Latent Domain Generalization in Federated Learning. [Paper]
- Prompt-enhanced Federated Content Representation Learning for Cross-domain Recommendation. [Paper]
- Dual prompt tuning for domain-aware federated learning. [Paper]
- Federated adaptive prompt tuning for multi-domain collaborative learning. [Paper]
- Breaking physical and linguistic borders: Multilingual federated prompt tuning for low-resource languages. [Paper]
- Federated Domain Generalization via Prompt Learning and Aggregation. [Paper]
- CP-Prompt: Composition-Based Cross-modal Prompting for Domain-Incremental Continual Learning. [Paper]
- Efficient federated learning for modern NLP. [Paper]
- Efficient federated learning with pre-trained large language model using several adapter mechanisms. [Paper]
- Client-customized adaptation for parameter-efficient federated learning. [Paper]
- FedCLIP: Fast generalization and personalization for CLIP in federated learning. [Paper]
- Communication efficient federated learning for multilingual neural machine translation with adapter. [Paper]
- Adapter-based Selective Knowledge Distillation for Federated Multi-domain Meeting Summarization. [Paper]
- FedDAT: An approach for foundation model finetuning in multi-modal heterogeneous federated learning. [Paper]
- Differentially private bias-term only fine-tuning of foundation models. [Paper]
- Conquering the communication constraints to enable large pre-trained models in federated learning. [Paper]
- Bridging the gap between foundation models and heterogeneous federated learning. [Paper]
- Exploring Selective Layer Fine-Tuning in Federated Learning. [Paper]
- Federated full-parameter tuning of billion-sized language models with communication cost under 18 kilobytes. [Paper]
- FwdLLM: Efficient Federated Finetuning of Large Language Models with Perturbed Inferences. [Paper]
- ZooPFL: Exploring black-box foundation models for personalized federated learning. [Paper]
- On the convergence of zeroth-order federated tuning for large language models. [Paper]
- Thinking Forward: Memory-Efficient Federated Finetuning of Language Models. [Paper]
- Communication-Efficient Byzantine-Resilient Federated Zero-Order Optimization. [Paper]
- FedBERT: When federated learning meets pre-training. [Paper]
- Federated split BERT for heterogeneous text classification. [Paper]
- FedSplitX: Federated Split Learning for Computationally-Constrained Heterogeneous Clients. [Paper]
- FedBiOT: LLM local fine-tuning in federated learning without full model. [Paper]
- Federated Data-Efficient Instruction Tuning for Large Language Models. [Paper]
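Many of the LoRA-based methods listed above share a common primitive: each client trains low-rank adapter factors locally and the server aggregates only those factors, never the full model. As a rough orientation (this is the naive FedAvg-of-factors baseline, not the algorithm of any specific paper above; the function name and shapes are illustrative):

```python
import numpy as np

def fedavg_lora(client_updates, weights=None):
    """Weighted average of per-client LoRA factors.

    client_updates: list of (A, B) pairs with shared rank r,
                    A: (r, d_in), B: (d_out, r).
    weights: optional per-client weights (e.g., local dataset sizes).
    Note: averaging A and B separately is the naive baseline; several
    papers above instead aggregate the products B @ A, or allow
    heterogeneous ranks, to avoid the bias this introduces.
    """
    n = len(client_updates)
    w = np.ones(n) / n if weights is None else np.asarray(weights, float) / np.sum(weights)
    A_avg = sum(wi * A for wi, (A, _) in zip(w, client_updates))
    B_avg = sum(wi * B for wi, (_, B) in zip(w, client_updates))
    return A_avg, B_avg

# Toy usage: two clients holding rank-2 adapters for a 4x3 weight matrix.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(2, 3)), rng.normal(size=(4, 2))) for _ in range(2)]
A, B = fedavg_lora(clients, weights=[100, 300])
delta_W = B @ A  # effective update applied on top of the frozen base weight
```

The `weights` argument mirrors standard FedAvg, where clients are weighted by local sample count; communicating only `A` and `B` (rank × dimension values) rather than the full `d_out × d_in` update is what makes these methods communication-efficient.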
If you find this work useful, please cite us:
```bibtex
@misc{wu2025surveyfederatedfinetuninglarge,
  title={A Survey on Federated Fine-tuning of Large Language Models},
  author={Yebo Wu and Chunlin Tian and Jingguang Li and He Sun and Kahou Tam and Li Li and Chengzhong Xu},
  year={2025},
  eprint={2503.12016},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2503.12016},
}
```