Divided by Zer0 edited this page Jun 6, 2024 · 4 revisions

Hosting a model on the Horde using the excellent Aphrodite engine is the most efficient way to serve text-generation models on the AI Horde. However, as Aphrodite doesn't have a horde bridge built in (yet), we need to install two pieces of software which work in tandem.

These instructions are for Linux. If you're on Windows, feel free to contribute the necessary instructions.

The installation is fairly straightforward. With a terminal open, go to a folder where you'd like Aphrodite installed, then:

```shell
python -m venv venv
source venv/bin/activate
python -m pip install -U aphrodite-engine --extra-index-url https://downloads.pygmalion.chat
```
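As a quick sanity check before moving on, you can confirm the package is importable from the venv. This sketch assumes the `aphrodite-engine` package installs a module named `aphrodite`:

```python
import importlib.util


def is_installed(module_name: str) -> bool:
    """Return True if the named module can be found in this environment."""
    return importlib.util.find_spec(module_name) is not None


if __name__ == "__main__":
    # "aphrodite" is assumed to be the module the aphrodite-engine package provides
    print("aphrodite installed:", is_installed("aphrodite"))
```

If this prints `False`, make sure the venv is activated before retrying the install.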

That's it. Aphrodite is now installed. You can run it with the relevant parameters, which will automatically download the model you specify and serve it on the port expected by the KoboldAI bridge.

The example below will serve the Llama-3-8B Instruct model on an RTX 4090:

```shell
aphrodite run Undi95/Meta-Llama-3-8B-Instruct-hf --launch-kobold-api --served-model-name meta-llama/Meta-Llama-3-8B-Instruct --max-model-len 4096 --max-length 4096 --port 5000 --gpu-memory-utilization 0.85
```

And this will serve Codestral:

```shell
aphrodite run bullerwins/Codestral-22B-v0.1-exl2_6.0bpw --launch-kobold-api --served-model-name mistralai/Codestral-22B-v0.1 --max-length 4096 --max-model-len 4096 --port 5000 -gmu 0.80
```

The next step is to install the AI Horde Worker. To do so, follow the instructions provided here: https://github.com/Haidra-Org/AI-Horde-Worker?tab=readme-ov-file#installing

Once you fill in your bridgeData.yaml and point it at the Aphrodite engine's port, the worker should start fulfilling jobs.
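As an illustration, a minimal bridgeData.yaml for this setup might look like the fragment below. The field names and values here are assumptions based on the worker's sample config; copy the template shipped with the AI Horde Worker repository for the authoritative list:

```yaml
# Hypothetical example values; start from the worker's own template
# and fill in your real details.
worker_name: "my-aphrodite-worker"   # a unique name for your worker
api_key: "0000000000"                # your AI Horde API key
kai_url: "http://localhost:5000"     # the port Aphrodite is serving on
max_threads: 1
max_length: 512
max_context_length: 4096
```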

That's it!
