Running Containerized (Docker)
Since Enfugue requires a GPU for effective operation, your host machine must have a GPU and be able to host GPU-accelerated Docker containers. At present, this is only possible using the Nvidia Container Toolkit, available for Linux machines. You must install this toolkit and then restart the Docker daemon before you can launch Enfugue with GPU acceleration.
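Before pulling Enfugue, you can confirm that Docker can see your GPU by running `nvidia-smi` inside a stock CUDA container. This is a sketch; the CUDA image tag below is an example, and any available CUDA base image will work:

```shell
# Sketch: verify GPU passthrough via the Nvidia Container Toolkit.
# The image tag is an example; substitute any available nvidia/cuda tag.
docker run --rm --gpus all --runtime nvidia \
  nvidia/cuda:11.7.1-base-ubuntu22.04 nvidia-smi
```

If this prints your GPU's status table, the toolkit is configured correctly.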
The containerized version includes TensorRT support.
The container is available directly from the GitHub Container Registry and can be pulled like this:
docker pull ghcr.io/painebenjamin/app.enfugue.ai:latest
To check that the container is working and can communicate with your GPU, run the version command in the container.
docker run --rm --gpus all --runtime nvidia ghcr.io/painebenjamin/app.enfugue.ai:latest version
This is the expected result:
Enfugue v.0.2.1
Torch v.1.13.1+cu117
AI/ML Capabilities:
---------------------
Device type: cuda
CUDA: Ready
TensorRT: Ready
DirectML: Unavailable
MPS: Unavailable
The basic run command is:
docker run --rm --gpus all --runtime nvidia -v ${{ YOUR CACHE DIRECTORY }}:/home/enfugue/.cache -p 45555:45555 ghcr.io/painebenjamin/app.enfugue.ai:latest run
What does this command do?
- Passes the `run` command to the `docker` command-line tool.
- Includes `--rm` to automatically remove the container when it exits.
- Includes `--gpus all` to let the Docker container see attached graphics cards.
- Includes `--runtime nvidia` to force Docker to use the Nvidia (GPU-capable) runtime.
- Includes `-v ${{ YOUR_CACHE_DIRECTORY }}:/home/enfugue/.cache` to mount the passed directory to ENFUGUE's `.cache` directory, which is the parent directory where Enfugue looks for files, downloads checkpoints, etc. The directories can also be changed in the UI as needed. This is not necessary, but is recommended if you intend to use many different AI models.
- Includes `-p 45555:45555` to bind the local port `45555` to the container port `45555`, ENFUGUE's HTTP listening port.
- Uses the image `ghcr.io/painebenjamin/app.enfugue.ai:latest`.
- Issues the `run` command to the image's executable entrypoint.
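As a concrete sketch of the command above, assuming a host cache directory of `~/.cache/enfugue` (an example path; substitute any directory you like):

```shell
# Sketch: run Enfugue with a host cache directory mounted in.
# ~/.cache/enfugue is an example path; use your own.
mkdir -p ~/.cache/enfugue
docker run --rm --gpus all --runtime nvidia \
  -v ~/.cache/enfugue:/home/enfugue/.cache \
  -p 45555:45555 \
  ghcr.io/painebenjamin/app.enfugue.ai:latest run
```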
You'll then be able to access the UI at http://localhost:45555. See below for information on changing the port or domain.
See Configuration for Advanced Users on how to use configuration files. You will need to ensure any configuration file passed can be read by the Docker container - this guide uses environment variables for ease-of-use.
Use a combination of `ENFUGUE_SERVER_SECURE`, `ENFUGUE_SERVER_DOMAIN`, and `ENFUGUE_SERVER_PORT` to control how Enfugue assembles URLs and listens for requests.
- Set `ENFUGUE_SERVER_DOMAIN` to your desired domain name or IP address.
- Set `ENFUGUE_SERVER_PORT` to the desired port to listen on, or leave the default of 45555 for HTTP.
- Set `ENFUGUE_SERVER_SECURE` to 1/True/Yes to use HTTPS, or anything else for HTTP.
  - If you want to use HTTPS and aren't using `app.enfugue.ai`, you'll need to provide your own certificate with `ENFUGUE_SERVER_CERT` and key with `ENFUGUE_SERVER_KEY`. You can also provide your own chain with `ENFUGUE_SERVER_CHAIN`.
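Putting those variables together, one way to pass them is with Docker's `-e` flag. This is a sketch; the domain and certificate paths below are illustrative assumptions, not defaults:

```shell
# Sketch: serve Enfugue over HTTPS on a custom domain.
# The domain, port, and certificate paths are example values.
docker run --rm --gpus all --runtime nvidia \
  -e ENFUGUE_SERVER_SECURE=1 \
  -e ENFUGUE_SERVER_DOMAIN=enfugue.example.com \
  -e ENFUGUE_SERVER_PORT=443 \
  -e ENFUGUE_SERVER_CERT=/certs/server.crt \
  -e ENFUGUE_SERVER_KEY=/certs/server.key \
  -v /path/to/certs:/certs \
  -p 443:443 \
  ghcr.io/painebenjamin/app.enfugue.ai:latest run
```

Note that the certificate directory is mounted into the container so the paths in `ENFUGUE_SERVER_CERT` and `ENFUGUE_SERVER_KEY` resolve inside it.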