Update Readme for fedllm-finetune example (#3068)
Co-authored-by: jafermarq <javier@flower.ai>
yan-gao-GY and jafermarq authored Mar 6, 2024
1 parent b121f12 commit 409803d
Showing 2 changed files with 27 additions and 29 deletions.
3 changes: 1 addition & 2 deletions README.md
@@ -132,7 +132,6 @@ Quickstart examples:
- [Quickstart (JAX)](https://github.com/adap/flower/tree/main/examples/quickstart-jax)
- [Quickstart (MONAI)](https://github.com/adap/flower/tree/main/examples/quickstart-monai)
- [Quickstart (scikit-learn)](https://github.com/adap/flower/tree/main/examples/sklearn-logreg-mnist)
- [Quickstart (XGBoost)](https://github.com/adap/flower/tree/main/examples/xgboost-quickstart)
- [Quickstart (Android [TFLite])](https://github.com/adap/flower/tree/main/examples/android)
- [Quickstart (iOS [CoreML])](https://github.com/adap/flower/tree/main/examples/ios)
- [Quickstart (MLX)](https://github.com/adap/flower/tree/main/examples/quickstart-mlx)
@@ -144,8 +143,8 @@ Other [examples](https://github.com/adap/flower/tree/main/examples):
- [PyTorch: From Centralized to Federated](https://github.com/adap/flower/tree/main/examples/pytorch-from-centralized-to-federated)
- [Vertical FL](https://github.com/adap/flower/tree/main/examples/vertical-fl)
- [Federated Finetuning of OpenAI's Whisper](https://github.com/adap/flower/tree/main/examples/whisper-federated-finetuning)
- [Federated Finetuning of Large Language Model](https://github.com/adap/flower/tree/main/examples/fedllm-finetune)
- [Federated Finetuning of a Vision Transformer](https://github.com/adap/flower/tree/main/examples/vit-finetune)
- [Comprehensive XGBoost](https://github.com/adap/flower/tree/main/examples/xgboost-comprehensive)
- [Advanced Flower with TensorFlow/Keras](https://github.com/adap/flower/tree/main/examples/advanced-tensorflow)
- [Advanced Flower with PyTorch](https://github.com/adap/flower/tree/main/examples/advanced-pytorch)
- Single-Machine Simulation of Federated Learning Systems ([PyTorch](https://github.com/adap/flower/tree/main/examples/simulation-pytorch)) ([Tensorflow](https://github.com/adap/flower/tree/main/examples/simulation-tensorflow))
53 changes: 26 additions & 27 deletions examples/fedllm-finetune/README.md
@@ -28,6 +28,7 @@ This will create a new directory called `fedllm-finetune` containing the followi
-- dataset.py <- Dataset and tokenizer build
-- utils.py <- Utility functions
-- test.py <- Test pre-trained model
-- app.py <- ServerApp/ClientApp for Flower Next
-- conf/config.yaml <- Configuration file
-- requirements.txt <- Example dependencies
```
@@ -42,8 +43,6 @@ pip install -r requirements.txt

## Run LLM Fine-tuning

### Run with `start_simulation()`

With an activated Python environment, run the example with default config values. The config is in `conf/config.yaml` and is loaded automatically.

```bash
@@ -61,31 +60,6 @@ python main.py model.name="openlm-research/open_llama_7b_v2" model.quantization=
python main.py num_rounds=50 fraction_fit.fraction_fit=0.25
```
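
The overrides above use Hydra's dotted-path syntax. As a rough illustration, here is a minimal sketch of a Hydra entry point, assuming the example loads `conf/config.yaml` this way (this is not the actual `main.py`):

```python
# Minimal sketch of a Hydra entry point (an assumption, not the example's
# actual main.py). Hydra reads conf/config.yaml and merges dotted-path CLI
# overrides such as `num_rounds=50` or `model.name="..."` into `cfg`.
import hydra
from omegaconf import DictConfig


@hydra.main(config_path="conf", config_name="config", version_base=None)
def main(cfg: DictConfig) -> None:
    print(cfg.model.name)  # overridable via `model.name="..."`
    print(cfg.num_rounds)  # overridable via `num_rounds=50`


if __name__ == "__main__":
    main()
```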

### Run with Flower Next (demo)

We use a 2-client setup to demonstrate how to run federated LLM fine-tuning with Flower Next.
Follow the steps below:

1. Start the long-running Flower server (SuperLink)
```bash
flower-superlink --insecure
```
2. Start two long-running Flower clients (SuperNodes)
```bash
# In a new terminal window, start the first long-running Flower client:
flower-client-app app:client1 --insecure
```
```bash
# In another new terminal window, start the second long-running Flower client:
flower-client-app app:client2 --insecure
```
3. Run the Flower App
```bash
# With the long-running server (SuperLink) and both clients (SuperNodes) up and running,
# we can now run the actual Flower App:
flower-server-app app:server --insecure
```

## Expected Results

![Smoothed training loss](_static/train_loss_smooth.png)
@@ -138,3 +112,28 @@ Finally, end your day with a relaxing visit to the London Eye, the tallest Ferri

The [`Vicuna`](https://huggingface.co/lmsys/vicuna-13b-v1.1) template we used in this example is designed for a chat assistant.
The generated answer is expected to be a multi-turn conversation. Feel free to try more interesting questions!
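
For reference, here is a minimal sketch of that chat format, assuming the upstream Vicuna v1.1 prompt layout; the exact template string used by this example's code may differ:

```python
# Sketch of the Vicuna v1.1 chat format (assumed from the upstream model card,
# not copied from this example's code). Multi-turn chats append further
# "USER: ... ASSISTANT: ..." pairs to the same prompt.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)


def build_prompt(question: str) -> str:
    return f"{SYSTEM} USER: {question} ASSISTANT:"


# Hypothetical question, chosen to match the travel-themed sample output above.
print(build_prompt("What should I do for a one-day trip to London?"))
```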

## Run with Flower Next (preview)

We use a 2-client setup to demonstrate how to run federated LLM fine-tuning with Flower Next.
Follow the steps below; a minimal sketch of what `app.py` might expose is shown after them:

1. Start the long-running Flower server (SuperLink)
```bash
flower-superlink --insecure
```
2. Start two long-running Flower clients (SuperNodes)
```bash
# In a new terminal window, start the first long-running Flower client:
flower-client-app app:client1 --insecure
```
```bash
# In another new terminal window, start the second long-running Flower client:
flower-client-app app:client2 --insecure
```
3. Run the Flower App
```bash
# With the long-running server (SuperLink) and both clients (SuperNodes) up and running,
# we can now run the actual Flower App:
flower-server-app app:server --insecure
```
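
For orientation, here is a minimal sketch of how `app.py` might expose the `client1`, `client2`, and `server` objects referenced above, assuming Flower Next's `ClientApp`/`ServerApp` APIs; the example's actual file wires in the LLM fine-tuning client and strategy instead of these placeholders:

```python
# Minimal sketch of an app.py layout for Flower Next (an assumption; the
# example's actual file plugs in the LLM fine-tuning client and strategy).
from flwr.client import ClientApp, NumPyClient
from flwr.server import ServerApp, ServerConfig
from flwr.server.strategy import FedAvg


class FlowerClient(NumPyClient):
    def get_parameters(self, config):
        # Placeholder: return the current model weights as NumPy arrays.
        return []

    def fit(self, parameters, config):
        # Placeholder: fine-tune the model locally, then return the updated
        # weights, the number of training examples, and optional metrics.
        return parameters, 1, {}


def client_fn(cid: str):
    return FlowerClient().to_client()


# `flower-client-app app:client1 --insecure` imports this object (same idea for client2).
client1 = ClientApp(client_fn=client_fn)
client2 = ClientApp(client_fn=client_fn)

# `flower-server-app app:server --insecure` imports this object.
server = ServerApp(config=ServerConfig(num_rounds=3), strategy=FedAvg())
```

The `app:client1` and `app:server` arguments passed to the commands above are simply `<module>:<attribute>` references into this file.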
