diff --git a/README.md b/README.md
index fc7e2a1a731..90faa2358fa 100644
--- a/README.md
+++ b/README.md
@@ -132,7 +132,6 @@ Quickstart examples:
 - [Quickstart (JAX)](https://github.com/adap/flower/tree/main/examples/quickstart-jax)
 - [Quickstart (MONAI)](https://github.com/adap/flower/tree/main/examples/quickstart-monai)
 - [Quickstart (scikit-learn)](https://github.com/adap/flower/tree/main/examples/sklearn-logreg-mnist)
-- [Quickstart (XGBoost)](https://github.com/adap/flower/tree/main/examples/xgboost-quickstart)
 - [Quickstart (Android [TFLite])](https://github.com/adap/flower/tree/main/examples/android)
 - [Quickstart (iOS [CoreML])](https://github.com/adap/flower/tree/main/examples/ios)
 - [Quickstart (MLX)](https://github.com/adap/flower/tree/main/examples/quickstart-mlx)
@@ -144,8 +143,8 @@ Other [examples](https://github.com/adap/flower/tree/main/examples):
 - [PyTorch: From Centralized to Federated](https://github.com/adap/flower/tree/main/examples/pytorch-from-centralized-to-federated)
 - [Vertical FL](https://github.com/adap/flower/tree/main/examples/vertical-fl)
 - [Federated Finetuning of OpenAI's Whisper](https://github.com/adap/flower/tree/main/examples/whisper-federated-finetuning)
+- [Federated Finetuning of Large Language Model](https://github.com/adap/flower/tree/main/examples/fedllm-finetune)
 - [Federated Finetuning of a Vision Transformer](https://github.com/adap/flower/tree/main/examples/vit-finetune)
-- [Comprehensive XGBoost](https://github.com/adap/flower/tree/main/examples/xgboost-comprehensive)
 - [Advanced Flower with TensorFlow/Keras](https://github.com/adap/flower/tree/main/examples/advanced-tensorflow)
 - [Advanced Flower with PyTorch](https://github.com/adap/flower/tree/main/examples/advanced-pytorch)
 - Single-Machine Simulation of Federated Learning Systems ([PyTorch](https://github.com/adap/flower/tree/main/examples/simulation-pytorch)) ([Tensorflow](https://github.com/adap/flower/tree/main/examples/simulation-tensorflow))
diff --git a/examples/fedllm-finetune/README.md b/examples/fedllm-finetune/README.md
index 108fdcf687b..7b71e928b2b 100644
--- a/examples/fedllm-finetune/README.md
+++ b/examples/fedllm-finetune/README.md
@@ -28,6 +28,7 @@ This will create a new directory called `fedllm-finetune` containing the followi
 -- dataset.py <- Dataset and tokenizer build
 -- utils.py <- Utility functions
 -- test.py <- Test pre-trained model
+-- app.py <- ServerApp/ClientApp for Flower-Next
 -- conf/config.yaml <- Configuration file
 -- requirements.txt <- Example dependencies
 ```
@@ -42,8 +43,6 @@ pip install -r requirements.txt
 ```
 
 ## Run LLM Fine-tuning
 
-### Run with `start_simulation()`
-
 With an activated Python environment, run the example with default config values. The config is in `conf/config.yaml` and is loaded automatically.
 ```bash
@@ -61,31 +60,6 @@ python main.py model.name="openlm-research/open_llama_7b_v2" model.quantization=
 python main.py num_rounds=50 fraction_fit.fraction_fit=0.25
 ```
 
-### Run with Flower Next (demo)
-
-We conduct a 2-client setting to demonstrate how to run federated LLM fine-tuning with Flower Next.
-Please follow the steps below:
-
-1. Start the long-running Flower server (SuperLink)
-   ```bash
-   flower-superlink --insecure
-   ```
-2. Start the long-running Flower client (SuperNode)
-   ```bash
-   # In a new terminal window, start the first long-running Flower client:
-   flower-client-app app:client1 --insecure
-   ```
-   ```bash
-   # In another new terminal window, start the second long-running Flower client:
-   flower-client-app app:client2 --insecure
-   ```
-3. Run the Flower App
-   ```bash
-   # With both the long-running server (SuperLink) and two clients (SuperNode) up and running,
-   # we can now run the actual Flower App:
-   flower-server-app app:server --insecure
-   ```
-
 ## Expected Results
 
 ![](_static/train_loss_smooth.png)
@@ -138,3 +112,28 @@ Finally, end your day with a relaxing visit to the London Eye, the tallest Ferri
 The [`Vicuna`](https://huggingface.co/lmsys/vicuna-13b-v1.1) template we used in this example is for a chat assistant.
 The generated answer is expected to be a multi-turn conversations.
 Feel free to try more interesting questions!
+
+## Run with Flower Next (preview)
+
+We conduct a 2-client setting to demonstrate how to run federated LLM fine-tuning with Flower Next.
+Please follow the steps below:
+
+1. Start the long-running Flower server (SuperLink)
+   ```bash
+   flower-superlink --insecure
+   ```
+2. Start the long-running Flower client (SuperNode)
+   ```bash
+   # In a new terminal window, start the first long-running Flower client:
+   flower-client-app app:client1 --insecure
+   ```
+   ```bash
+   # In another new terminal window, start the second long-running Flower client:
+   flower-client-app app:client2 --insecure
+   ```
+3. Run the Flower App
+   ```bash
+   # With both the long-running server (SuperLink) and two clients (SuperNode) up and running,
+   # we can now run the actual Flower App:
+   flower-server-app app:server --insecure
+   ```
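The three "Run with Flower Next (preview)" steps in this patch can be sketched as a single Python launcher instead of four terminal windows. This is a sketch, not part of the example: the `launch_order` and `run_all` helpers and the fixed startup wait are assumptions; only the command lines themselves are taken verbatim from the README section above.

```python
import subprocess
import time

# Command lines copied verbatim from the "Run with Flower Next (preview)" steps.
SUPERLINK = ["flower-superlink", "--insecure"]
SUPERNODES = [
    ["flower-client-app", "app:client1", "--insecure"],
    ["flower-client-app", "app:client2", "--insecure"],
]
SERVER_APP = ["flower-server-app", "app:server", "--insecure"]


def launch_order():
    """Return the commands in the order the README starts them:
    SuperLink first, then both SuperNodes, then the ServerApp."""
    return [SUPERLINK, *SUPERNODES, SERVER_APP]


def run_all(startup_wait=2.0):
    """Spawn the long-running processes in the background, run the
    ServerApp in the foreground, and terminate everything on exit."""
    background = [subprocess.Popen(cmd) for cmd in (SUPERLINK, *SUPERNODES)]
    # Crude wait for the SuperLink/SuperNodes to come up (an assumption;
    # a real launcher would poll the SuperLink until it accepts connections).
    time.sleep(startup_wait)
    try:
        subprocess.run(SERVER_APP, check=True)
    finally:
        for proc in background:
            proc.terminate()
```

Calling `run_all()` from the `fedllm-finetune` directory (with the example's requirements installed so the `flower-*` CLIs are on `PATH`) reproduces steps 1-3 in order; the fixed sleep is only a demo-grade stand-in for a readiness check.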