
FlowerTune LLM (General NLP) — Federated Instruction Tuning

This Flower app performs federated instruction tuning of a pretrained LLM on the General NLP task of the FlowerTune LLM leaderboard.

  • Dataset (default): vicgalle/alpaca-gpt4 via Flower Datasets
  • Fine-tuning: LoRA (šŸ¤— PEFT)
  • Orchestration: Flower Simulation Engine
  • Aggregation: FedAvg
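
The aggregation step can be illustrated with a minimal, self-contained sketch (not the Flower implementation): FedAvg averages the clients' parameter updates, weighting each client by its number of training examples.

```python
# Minimal FedAvg sketch for illustration only -- Flower's strategy does
# this over serialized model weights; here we use flat lists of floats.

def fedavg(client_updates):
    """Aggregate client updates.

    client_updates: list of (num_examples, params) pairs, where params
    is a flat list of floats of equal length across clients.
    """
    total = sum(n for n, _ in client_updates)
    dim = len(client_updates[0][1])
    agg = [0.0] * dim
    for n, params in client_updates:
        weight = n / total  # clients with more data contribute more
        for i, p in enumerate(params):
            agg[i] += weight * p
    return agg

# Two clients: 30 examples vs. 10 examples (weights 0.75 and 0.25).
print(fedavg([(30, [1.0, 2.0]), (10, [5.0, 6.0])]))  # → [2.0, 3.0]
```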

All app settings are configured in pyproject.toml under [tool.flwr.app.config].
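
As a rough sketch, that section might look like the fragment below; the key names and values here are illustrative assumptions, so consult this repository's pyproject.toml for the authoritative settings:

```toml
[tool.flwr.app.config]
# Hypothetical keys for illustration only.
model.name = "..."           # Hugging Face id of the base model
num-server-rounds = 10       # number of federated rounds (example value)
train.learning-rate-max = 5e-5  # peak LR for LoRA fine-tuning (example value)
```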

Quickstart

1) Create environment & install

conda create -n flwr-tune python=3.9
conda activate flwr-tune

pip install -e .

2) Run

flwr run

Tip: If the base model is gated on Hugging Face, request access to it and authenticate locally with hf auth login before running.
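
flwr run resolves the federation to execute from pyproject.toml. A typical local-simulation federation block looks like the sketch below; the federation name and supernode count are illustrative assumptions, so check this repository's pyproject.toml for the actual values:

```toml
[tool.flwr.federations]
default = "local-simulation"

[tool.flwr.federations.local-simulation]
options.num-supernodes = 10  # number of simulated clients (example value)
```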

Project structure

flowertune-llm-general-nlp/
ā”œā”€ flwr_nlp/
│  ā”œā”€ client_app.py
│  ā”œā”€ server_app.py
│  ā”œā”€ dataset.py
│  ā”œā”€ models.py
│  └─ strategy.py
ā”œā”€ pyproject.toml
└─ README.md

Notes

  • This repository is intentionally minimal for leaderboard submission: it contains only the federated fine-tuning app code and required configuration.
  • Logs, checkpoints, and downstream evaluation code are excluded.