@zexili/flwr-nlp
FlowerTune LLM (General NLP) — Federated Instruction Tuning
This Flower app performs federated instruction tuning of a pretrained LLM for the General NLP task of the FlowerTune LLM leaderboard.
- Dataset (default): vicgalle/alpaca-gpt4 via Flower Datasets
- Fine-tuning: LoRA (Hugging Face 🤗 PEFT)
- Orchestration: Flower Simulation Engine
- Aggregation: FedAvg
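
With FedAvg, each round the server averages the clients' model updates, weighted by the number of local training examples. A minimal NumPy sketch of that weighted average (names here are illustrative, not this app's actual code — in practice Flower's built-in FedAvg strategy handles this):

```python
import numpy as np

def fedavg(client_weights, num_examples):
    """FedAvg: example-count-weighted average of client parameters.

    client_weights: one list of np.ndarray per client (layer by layer).
    num_examples:   number of local training examples per client.
    """
    total = sum(num_examples)
    num_layers = len(client_weights[0])
    return [
        sum(w[layer] * n for w, n in zip(client_weights, num_examples)) / total
        for layer in range(num_layers)
    ]

# Two toy clients with a single one-layer "model" each
a = [np.array([0.0, 2.0])]
b = [np.array([4.0, 2.0])]
avg = fedavg([a, b], [1, 3])  # client b has 3x the data, so it counts 3x
```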
All app settings are configured in pyproject.toml under [tool.flwr.app.config].
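
As a sketch, such a section can look like the following — the key names and values below are illustrative assumptions only; the authoritative settings are in this repo's pyproject.toml:

```toml
[tool.flwr.app.config]
# Illustrative keys/values, not this app's exact configuration
num-server-rounds = 10
dataset.name = "vicgalle/alpaca-gpt4"
model.lora.rank = 8
model.lora.alpha = 16
train.learning-rate = 5e-5
```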
Quickstart
1) Create environment & install
conda create -n flwr-tune python=3.9
conda activate flwr-tune
pip install -e .
2) Run
flwr run
Tip: If the base model is gated on Hugging Face, request access first and authenticate with hf auth login before running.
Project structure
flowertune-llm-general-nlp/
├── flwr_nlp/
│   ├── client_app.py
│   ├── server_app.py
│   ├── dataset.py
│   ├── models.py
│   └── strategy.py
├── pyproject.toml
└── README.md
Notes
- This repository is intentionally minimal for leaderboard submission: it contains only the federated fine-tuning app code and the required configuration.
- Logs, checkpoints, and downstream evaluation code are excluded.