
Releases: bigscience-workshop/petals

v2.2.0: Falcon, macOS support, and more

06 Sep 17:29

Highlights

🦅 Falcon support. Petals now supports all models based on Falcon, including Falcon 180B released today. We improved the 🤗 Transformers FalconModel implementation to be up to 40% faster on recent GPUs. Our chatbot app runs Falcon 180B-Chat at ~2 tokens/sec.

Falcon-40B is licensed under Apache 2.0, so you can load it by specifying tiiuae/falcon-40b or tiiuae/falcon-40b-instruct as the model name. Falcon-180B is distributed under a custom license, and it is not yet clear whether we can provide a Python interface for inference and fine-tuning of this model. For now, it is only available in the chatbot app, and we are waiting for further clarification from TII on this issue.
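For example, the Apache-licensed Falcon-40B variants can be loaded with the standard Petals client. A minimal sketch (the prompt is illustrative, and generation speed depends on which servers currently host these blocks):

from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "tiiuae/falcon-40b-instruct"  # or "tiiuae/falcon-40b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Tokenize a prompt and generate a short continuation over the swarm
inputs = tokenizer("A good name for a pet falcon is", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0]))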

🍏 Native macOS support. You can run Petals clients and servers on macOS natively - just install Homebrew and run these commands:

brew install python
python3 -m pip install git+https://github.com/bigscience-workshop/petals
python3 -m petals.cli.run_server petals-team/StableBeluga2

If your computer has an Apple M1/M2 chip, the Petals server will use the integrated GPU automatically. We recommend hosting only Llama-based models, since other supported architectures do not work efficiently on M1/M2 chips yet. We also recommend using Python 3.10+ on macOS (Homebrew installs it automatically).

🔌 Serving custom models. Custom models now automatically show up at https://health.petals.dev as "not officially supported" models. As a reminder, you are not limited to the models listed at https://health.petals.dev: you can run a server hosting any model based on the BLOOM, Llama, or Falcon architecture (as long as the model license allows it), or even add support for a new architecture yourself. This release also improves Petals compatibility with some popular Llama-based models (e.g., models from NousResearch).
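For instance, a server hosting a custom Llama-based model is started the same way as one hosting an officially supported model. A minimal sketch (the repository name is illustrative; any public BLOOM-, Llama-, or Falcon-based repository whose license permits it should work):

python -m petals.cli.run_server NousResearch/Nous-Hermes-Llama2-13b

Once the server is up, the model will appear at https://health.petals.dev as a "not officially supported" model.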

🐞 Bug fixes. This release also fixes inference of prefix-tuned models, which was broken in Petals 2.1.0.

What's Changed

Full Changelog: v2.1.0...v2.2.0

v2.1.0: 🤗 .generate(), faster loading, responsive inference, and more

24 Aug 16:42
18e93af

Highlights

🔌 Compatibility with 🤗 Transformers generation utils. Petals models now directly use 🤗 Transformers .generate() implementation instead of custom generation code. This means that you can use a variety of generation methods and constraints implemented in 🤗 Transformers (e.g., repetition_penalty, beam search, etc.) and expect an exact match between Petals and a model running locally.

Most common methods are compatible with reusing inference sessions, so that you can run .generate() multiple times without reprocessing the dialogue history from scratch:

with model.inference_session(max_length=100):
    outputs1 = model.generate(user_prompt1, repetition_penalty=1.2)
    outputs2 = model.generate(user_prompt2, repetition_penalty=1.2)
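Here is the same pattern spelled out end to end, as a minimal sketch (the prompts are illustrative; the model repo is the repacked Stable Beluga 2 described below):

from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

with model.inference_session(max_length=100):
    # The first call processes the initial prompt
    prompt1 = tokenizer("User: Hi! Who are you?\nAssistant:", return_tensors="pt")["input_ids"]
    outputs1 = model.generate(prompt1, max_new_tokens=20, repetition_penalty=1.2)

    # The second call only sends the new tokens; the session keeps the earlier history
    prompt2 = tokenizer("\nUser: What can you do?\nAssistant:",
                        return_tensors="pt", add_special_tokens=False)["input_ids"]
    outputs2 = model.generate(prompt2, max_new_tokens=20, repetition_penalty=1.2)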

Faster loading of Stable Beluga 2. We repacked Stable Beluga 2, the most popular model at the moment, to increase its loading speed and minimize RAM and disk space requirements. The repacked version can be loaded from the petals-team/StableBeluga2 repository and is fully compatible with clients and servers using the standard repository (stabilityai/StableBeluga2).

Now, clients need to download only 1.05 GB of data to run Stable Beluga 2 (instead of ~20 GB before) and require only 4 GB of RAM (instead of ~20 GB before). Servers need to download and store 2x less data and load the model from disk significantly faster. If you're switching from the old repository, don't forget to remove the old cache in the ~/.cache/petals/models--stabilityai--StableBeluga2 directory to save disk space.

⏱️ More responsive inference. In older versions, servers could become unresponsive for a few seconds while processing large prefixes (thousands of tokens) during inference. This release allows small inference requests (a few tokens) to be served in the middle of processing a large request, avoiding freezes in token-by-token inference caused by someone else processing a large prefix.

🔒 Minor improvements. This release adds support for loading weights in the safetensors format on servers and adds the blocked_servers client option to avoid a given set of servers:

from petals import AutoDistributedModelForCausalLM

blocked_servers = ["12D3KooWA6g...", "12D3KooWGyD..."]  # Full peer IDs from https://health.petals.dev
model = AutoDistributedModelForCausalLM.from_pretrained(model_name, blocked_servers=blocked_servers)

🐞 Bug fixes. This release also includes a variety of bug fixes that speed up the chatbot app and fine-tuning, better bypass recently disconnected servers, improve the rebalancing algorithm and the usability of benchmarks, and fix throughput measurements and installation on ARM CPUs.

We also fixed Petals compatibility with the latest releases of 🤗 Transformers, Accelerate, and PEFT libraries.

Breaking changes

📖 Default inference sessions. If you run .generate() or forward passes inside an .inference_session() context, they now use the opened session by default. These snippets are now equivalent:

# Using default session
with model.inference_session(max_length=100):
    output_ids = model.generate(input_ids, max_new_tokens=3)

# Explicitly specifying a session
with model.inference_session(max_length=100) as sess:
    output_ids = model.generate(input_ids, max_new_tokens=3, session=sess)

Earlier, the first snippet created a new session, which confused many users and led to bugs.

➡️ Renaming. We renamed SequenceManagerConfig to petals.ClientConfig and petals.dht_utils to petals.utils.dht. The old names now lead to DeprecationWarnings and will be removed in Petals 2.2.0+.
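If your code imports these names, the new import paths look like this (a minimal sketch):

from petals import ClientConfig   # formerly SequenceManagerConfig
from petals.utils import dht      # formerly petals.dht_utils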

What's Changed

New Contributors

Full Changelog: v2.0.1...v2.1.0

v2.0.1: Inference of longer sequences, Python 3.11 support, bug fixes

23 Jul 14:54
f3fafd1

Highlights

🛣️ Inference of longer sequences. We extended the max sequence length to 8192 tokens for Llama 2 and added chunking to avoid server out-of-memory errors (which happened when processing long prefixes). This became possible thanks to the multi-query attention used in Llama 2, which requires 8x less GPU memory for attention caches. Now you can process longer sequences using a Petals client and have dialogues of up to 8192 tokens at https://chat.petals.dev
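As a minimal sketch, a client can now reserve the full 8192-token budget when opening an inference session (the model repo is illustrative; Llama 2 repos are gated and require a 🤗 access token):

from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "meta-llama/Llama-2-70b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A long dialogue history...", return_tensors="pt")["input_ids"]
with model.inference_session(max_length=8192) as sess:
    # Long prefixes are now chunked server-side, so they no longer risk server OOMs
    outputs = model.generate(inputs, max_new_tokens=32, session=sess)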

🐍 Python 3.11 support. Petals clients and servers now work on Python 3.11.

🐞 Bug fixes. We fixed the server's --token argument (used to provide your 🤗 Model Hub access token for loading Llama 2), possible deadlocks in the server, issues with fine-tuning speed (servers available via relays are now deprioritized), and other minor load-balancing issues.
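For example, a server hosting Llama 2 blocks can pass the access token like this (the model repo and token placeholder are illustrative):

python -m petals.cli.run_server meta-llama/Llama-2-70b-chat-hf --token YOUR_HF_ACCESS_TOKEN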

🪟 Running server on Windows. We made a better guide for running a server in WSL (Windows Subsystem for Linux).

📦 Running server on Runpod. We added a guide for using a Petals template on Runpod.

What's Changed

Full Changelog: v2.0.0.post1...v2.0.1

v2.0.0: LLaMA 1 and 2, Guanaco, 4-bit, shortest-path routing, direct server-to-server communication

19 Jul 18:29
b1ff8bd

We're excited to announce Petals 2.0.0 — the largest Petals release to date!

Highlights

🦙 Support for LLaMA and LLaMA 2. We've added support for inference and fine-tuning of any models based on 🤗 Transformers LlamaModel, including all variants of LLaMA and LLaMA 2 — one of the strongest open source models available today. The public swarm hosts the largest variants of these models, LLaMA-65B and LLaMA 2 (70B and 70B-Chat), providing inference at the speed of up to 5-6 tokens/sec.

🗜️ 4-bit quantization. We've integrated efficient 4-bit (NF4) quantization from the recent "QLoRA: Efficient Finetuning of Quantized LLMs" paper. This allows using ~40% less GPU memory (and thus ~40% fewer servers) to fit all model blocks, and it gives a ~2x speedup for token-by-token inference compared to the 8-bit quantization we used previously, with relatively small quality loss.

🔌 Pre-loading LoRA adapters, such as Guanaco. We added the ability to pre-load LoRA adapters compatible with the 🤗 PEFT library, which may add extra functionality to the model you host. You can do this using the --adapters argument on the server (e.g., --adapters repo1/adapter1 repo2/adapter2). These adapters are activated at a client's request - specifically, the client may specify .from_pretrained(..., active_adapter="repo1/adapter1") when loading a distributed model. One example is Guanaco - an instruction-finetuned adapter for LLaMA that turns it into a helpful chatbot carefully following the user's instructions. You can try LLaMA with this adapter in our chatbot app.
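A minimal sketch of both sides (the model and adapter repositories are illustrative):

python -m petals.cli.run_server huggyllama/llama-65b --adapters timdettmers/guanaco-65b

On the client side, a pre-loaded adapter is then activated when loading the distributed model:

from petals import AutoDistributedModelForCausalLM

model = AutoDistributedModelForCausalLM.from_pretrained(
    "huggyllama/llama-65b", active_adapter="timdettmers/guanaco-65b"
)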

➡️ Direct server-to-server communication. Previously, servers didn't send tensors to each other directly due to specifics of our fault-tolerant inference algorithm. This update changes that, saving the round-trip time between servers and the client and giving substantial speedups for clients located far away from the servers they're using.

🛣️ Shortest-path routing for inference. Previously, a client didn't properly choose geographically close and fast servers, so it could end up with a slow inference chain, especially if the swarm has many servers located far away from it. Now, the client builds a full graph of client-server and server-server latencies, as well as server inference speeds, to find the fastest chain of servers for inference among all possible ones. It also considers the amount of GPU memory left for attention caches, so we don't choose a nearby server that doesn't actually have memory for our request.

🌎 Loading models directly from 🤗 Model Hub and Auto classes. Starting from Petals 2.0.0, models do not need to be converted to a special format to be hosted by Petals. Instead, both clients and servers can load models directly from 🤗 Model Hub, fetching only the shards they need to host their part of the model. Furthermore, you can write code supporting multiple architectures at once using Auto classes, such as AutoDistributedConfig.from_pretrained(...) and AutoDistributedModelForCausalLM.from_pretrained(...). The guide for adding new model architectures to Petals also became much simpler, since the Petals code is now generalized across architectures and the model conversion step is gone.
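A minimal sketch of architecture-agnostic loading (the repository names are illustrative):

from petals import AutoDistributedConfig, AutoDistributedModelForCausalLM

# The same code path works for both BLOOM- and Llama-based repositories
config = AutoDistributedConfig.from_pretrained("bigscience/bloom")
model = AutoDistributedModelForCausalLM.from_pretrained("huggyllama/llama-65b")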

🏋️ Fine-tuning examples. We've switched most examples to LLaMA-65B and fixed previously reported bugs. In particular, the "Getting started" notebook now includes a simple example of deep prompt tuning on a dummy task, and the sequence classification notebook uses LLaMA-65B and improved hyperparameters for stable training.

🖥️ Upgraded swarm monitor. The swarm monitor now shows much more info about each server, including pre-loaded LoRA adapters, detailed performance info, latencies to potential next servers, and so on. All this info is published to the DHT, so you don't need to ping each server to fetch it. We've also added a "Contributor" column, so contributors hosting 10+ blocks get a chance to publish their name and advertise their company or a social media account in exchange for hosting a server for Petals. The name (or link) shown there may be specified using the server's --public_name argument.
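For example (the model repo and displayed name are illustrative):

python -m petals.cli.run_server huggyllama/llama-65b --public_name "Example Lab (https://example.org)"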

What's Changed


v1.1.5: Faster fine-tuning, bug fixes, and more

09 May 23:03
675bacb

Highlights

⏱ Faster fine-tuning. Fine-tuning uses ~2x less traffic (tensors are now sent in bfloat16 by default) and builds routes using a heuristic maximizing the swarm's throughput. This should address timeout errors that could happen during fine-tuning.

🐞 Bug fixes. On servers, this release fixes out-of-memory errors and freezing network throughput evals. On clients, it fixes issues with slicing RemoteSequential and with silently ignoring unsupported .generate() kwargs. Also, this release fixes warnings originating from hivemind.p2p and hivemind.compression.

🛣️ Updated throughput formula. We have updated the throughput formula to reflect that servers hosting many blocks still run forward and backward passes through only one block at a time. Don't be surprised if your throughput is lower than in 1.1.4 — these numbers are not directly comparable!

🖼️ Improved lower-level interfaces. We have refactored lower-level interfaces, such as RemoteSequential and RemoteSequenceManager, to be more reliable (e.g. when doing retries) and much easier to use. Some rarely used low-level functions in petals.dht_utils were removed.

What's Changed

  • Fix OOMs happening in case of accelerate >= 0.16.0 by @borzunov in #310
  • Refactor RemoteSequenceManager by @borzunov in #309
  • Update hivemind to 1.1.8, enable efficient bfloat16 encoding by @borzunov in #311
  • Replace .make_sequence(..., mode="random") with mode="max_throughput" by @borzunov in #313
  • Divide compute throughput by average no. of used blocks by @borzunov in #314
  • Raise error for unexpected .generate() kwargs by @borzunov in #315
  • Abort speedtest if it runs too long by @borzunov in #316
  • Bump version to 1.1.5 by @borzunov in #312

Full Changelog: v1.1.4...v1.1.5

v1.1.4: Extended GPU support, faster startup, and more

21 Apr 02:26
93c4eba

Highlights

🗝️ 8-bit servers support more GPUs. A bitsandbytes update brings 8-bit support to older generations of NVIDIA GPUs, as well as the GeForce 16 GPU series (e.g. 1660 Ti). Please try Petals 1.1.4 if you previously had errors like Your GPU does not support Int8 Matmul! and cublasLt ran into an error! on some GPUs. This version also loads weights in 8-bit by default when tensor parallelism is enabled.

⏱️ Servers start faster. Servers take ~2x less time to load block weights from the disk cache to the GPU memory. The next release will also reduce the time it takes to download the weights from the Internet, since they will be downloaded in 8-bit instead of 16-bit.

🧵 Multi-threaded clients work faster. Earlier, multi-threaded clients performed only one network request at a time due to a bug in hivemind, which has now been fixed. This significantly improves the speed of the chat.petals.ml app when multiple users chat concurrently.

⏱️ Clients start faster. Clients take ~10% less time to load the model, since they build a route through remote servers in parallel with loading the local part of the model (input/output embeddings).

🌳 Relaxed dependency requirements. We relaxed version requirements for transformers and other huggingface libraries, so you can update them independently of Petals. In particular, Petals works with PyTorch 2.0 and the latest transformers release. Also, we fixed a bug where the client loaded a model in float32 by default (instead of bfloat16/float16) in some transformers releases. Please try Petals 1.1.4 if you previously had out-of-memory errors when running the client.

What's Changed

Full Changelog: v1.1.3...v1.1.4

v1.1.3: Bug fixes

01 Mar 09:15

Highlights

🐞 Bug fixes. We have fixed a variety of minor issues related to timeout errors in the client, fine-tuning, and tensor parallelism.

⚙️ New options in the client. Added allowed_servers and max_retries options:

  • allowed_servers restricts the set of servers a client can use for its requests (e.g., to use only servers you trust to process your data).
  • max_retries limits the number of retries a client makes before raising an exception (previously, clients retried indefinitely). See the sketch below for both options.
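A minimal sketch of both options, assuming they are passed to .from_pretrained() like other client options (the 1.x client class, peer IDs, and model repo are illustrative):

from petals import DistributedBloomForCausalLM

allowed_servers = ["12D3KooWA6g...", "12D3KooWGyD..."]  # full peer IDs of servers you trust
model = DistributedBloomForCausalLM.from_pretrained(
    "bigscience/bloom-petals", allowed_servers=allowed_servers, max_retries=3
)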

📚 FAQ. We have released the FAQ page that covers common questions about running clients and servers, as well as troubleshooting common problems.

What's Changed

Full Changelog: v1.1.2...v1.1.3

v1.1.2: Faster inference, new model, and more

30 Jan 20:38
b03efb1

Highlights

🏃‍♀️ Faster inference. We've shipped server-side changes improving the inference speed by up to 30%. This is a result of profiling the server's inference performance (see details in #224 and #225). The public swarm will become faster once everyone upgrades to the latest Petals version and restarts their servers.

🐞 Prompt-tuning bug fixes. We've shipped bug fixes for prompt-tuning notebooks (see details in #231).

🧑‍🏫 New pretrained model. We've added a new model, BLOOMZ-176B by BigScience, to the public swarm. You can run it (or host its blocks) by specifying bigscience/bloomz-petals as the model name.

  • BLOOMZ is a version of BLOOM fine-tuned to follow human instructions in the zero-shot regime. See details in its model card and paper.
  • The chatbot app now uses BLOOMZ by default. You can ask it to generate texts, code, or perform various tasks. It responds better than the regular BLOOM, which often went off-topic instead of actually doing the task you asked.

What's Changed

New Contributors

Full Changelog: v1.1.1...v1.1.2

v1.1.1: More stable and fast

13 Jan 21:41
cea83d3

Highlights

⛰️ Stability. This release improves the stability and performance of the Petals DHT in the presence of many servers joined via NAT traversal & relays. Now, the DHT prefers to store keys on directly reachable peers, so all peers can access them faster and with fewer failures. Also, this release contains a minor fix to the block reassignment algorithm that reduces the excess reassignments that previously led to swarm downtime.

🌎 Basic routing. We have improved the routing algorithm for inference, so that clients weakly prefer servers holding more blocks in order to minimize latency and increase inference speed. This is only a basic algorithm, and we are working on smarter routing (taking into account latency, throughput, etc.) for both inference and fine-tuning in future releases. This release also makes the servers share more technical information about themselves (their version, free cache, etc.), so it can be used by smarter routing algorithms in the future and shown at http://health.petals.ml for debugging purposes.

What's Changed

  • Fix fine-tuning notebooks intros by @borzunov in #194
  • Ignore network RPS if we failed to measure it by @borzunov in #198
  • Make client ignore blacklist if all servers holding a block are blacklisted by @borzunov in #197
  • Increase tolerances in test_tp_block by @justheuristic in #196
  • Fix --no_auto_relay help by @borzunov in #199
  • Use length-weighted sampling in routing for inference by @justheuristic in #204
  • Return available cache size in rpc_info() by @justheuristic in #191
  • Add service checking direct reachability from peers by @justheuristic in #195
  • Report server version and dht.client_mode in rpc_info(), check for updates on startup by @borzunov in #209
  • Don't switch blocks if it makes swarm disjoint by @borzunov in #210
  • Fix output shape when resuming generation by @borzunov in #211
  • Improve errors in case of missing blocks, suggest to join your own server by @borzunov in #212
  • CI: Convert model only when convert_model.py or setup.cfg change by @borzunov in #213
  • CI: Update deprecated actions, don't measure network RPS by @borzunov in #215
  • Bump version to 1.1.1 by @borzunov in #214

Full Changelog: v1.1.0...v1.1.1

v1.1.0: NAT traversal, relays, and more

10 Jan 11:53
82c9f93

Highlights

🏠 NAT traversal & relays. Now, servers can join the swarm automatically even if your machine is located behind a NAT or a firewall, or has a dynamic IP address. You don't have to manually set up port forwarding or provide any arguments to make it work.

  • Please upgrade the Petals package and restart all your servers & clients to use this feature or access servers joined via relays:

    pip install --upgrade petals

  • How does it work? If the server learns that it can't accept incoming connections due to a NAT or firewall, it opens a long-term outgoing connection to one of the relay nodes, and the relay node then forwards all requests to this server through this connection. In turn, any server with a public IP may serve as a relay node if necessary. We use libp2p circuit relays under the hood: https://docs.libp2p.io/concepts/nat/circuit-relay/

💬 Chatbot app. We've released a chatbot app working over Petals: http://chat.petals.ml (source code).

  • Disclaimer: This chatbot uses the regular BLOOM, which is not fine-tuned for question answering. Please do not expect it to behave like ChatGPT.

  • How does it work? Under the hood, this web app uses our HTTP endpoint for running inference using the public Petals swarm. You can use this endpoint for your own projects, or set up another endpoint yourself (no GPU needed). See API docs here: https://github.com/borzunov/chat.petals.ml#http-api-methods

🏃‍♀️ Faster CPU-only clients. If your CPU supports the AVX512 instruction set, a CPU-only client now runs almost as fast as a GPU-enabled one. This way, you can rent cheap CPU instances to run the client or an HTTP endpoint, like the one we use for the chatbot app.

  • How to use it? AVX512 is mostly present on late-generation Intel Xeon CPUs. You can rent one by choosing a "dedicated CPU" instance with 16+ GB RAM on DigitalOcean.

🏥 Swarm health monitor. We've updated the swarm health monitor: http://health.petals.ml (source code). It provides an overview of servers who joined the public swarm and reports any connection issues.

What's Changed

New Contributors

Full Changelog: v1.0.0...v1.1.0