
ENFUGUE Web UI v0.1.3

Released by @painebenjamin · 06 Jul 04:11 · commit 16d42c2

Thank You!

Thanks again to everyone who has helped test Enfugue so far. I'm happy to release the third alpha package, which comes with more bug fixes, some hotly requested features, and improved stability and robustness.

Installation

Standalone

First-Time Installation

Linux

First, decide which version you want: with or without TensorRT support. TensorRT requires a powerful, modern Nvidia GPU.
Then, download the appropriate manylinux files here (3 for TensorRT, 2 for base), place them in their own folder, then concatenate and extract them. A simple console command to do that is:

cat enfugue-server-0.1.3*.part | tar -xvz
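
As a fuller sketch of the Linux steps above, assuming the .part files were downloaded to ~/Downloads and that you want the installation in its own folder (the folder name and download path here are only examples):

mkdir enfugue-server && cd enfugue-server
mv ~/Downloads/enfugue-server-0.1.3*.part .
cat enfugue-server-0.1.3*.part | tar -xvz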

Windows

Download the win64 files here, and extract them using a program that can extract from multi-part archives, such as 7-Zip.

If you are using 7-Zip, do not extract the two files independently. If both are in the same directory when you extract the first, 7-Zip will automatically extract the second; the second file cannot be extracted on its own.
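
If you prefer the command line, 7-Zip's 7z executable can extract a multi-part archive when pointed at the first part alone; the file name below is an assumption, so adjust it to whatever the first win64 part you downloaded is called:

7z x enfugue-server-0.1.3-win64.zip.001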

Upgrading

To upgrade either distribution, download and extract the appropriate upgrade package from this release, then copy all files in the upgrade package into your Enfugue installation directory, overwriting any existing files.
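
As a minimal sketch of that copy on Linux (both paths are placeholders; substitute the folder you extracted the upgrade package into and your actual installation directory):

cp -r ./enfugue-upgrade/* /path/to/enfugue-server/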

Provided Conda Environments

First-Time Installation

To install with the provided Conda environments, you need to install a version of Conda.
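
If you do not already have Conda, Miniconda is one lightweight option; a minimal sketch for Linux is below (see the Conda documentation for Windows and macOS installers):

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh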

After installing Conda and ensuring it is available from your shell or command line, download one of the environment files based on your platform and graphics API.

  1. First, choose windows- or linux- based on your platform.
  2. Then, choose your graphics API:
    • If you have a powerful next-generation Nvidia GPU (3000 series or better, with at least 12 GB of VRAM), use tensorrt for all of the capabilities of cuda plus the added ability to compile TensorRT engines.
    • If you have any other Nvidia GPU or other CUDA-compatible device, select cuda.
    • Additional graphics APIs (rocm, mps, and directml) are being added and will be available soon.

Finally, using the file you downloaded, create your Conda environment:

conda env create -f <downloaded_file.yml>

You've now installed Enfugue and all dependencies. To run it, activate the environment and then run the installed binary.

conda activate enfugue
enfugue run

Upgrading

To upgrade with the provided environment, use pip like so:

conda activate enfugue
pip install enfugue --upgrade

Self-Managed Environment

First-Time Installation

pip install enfugue

If you are on Linux and want TensorRT support, execute:

pip install enfugue[tensorrt]
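
Note that some shells, such as zsh, treat square brackets specially, in which case the extra must be quoted:

pip install "enfugue[tensorrt]"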

If you are on Windows and want TensorRT support, follow the steps detailed here.

Upgrading from 0.1.x

pip install enfugue --upgrade

New Features

  • Portable distributions are now released as both a standalone package and an upgrade package. When using an upgrade package, copy the contents of the folder over your previous installation, overwriting any existing files.
  • Added a Linux portable distribution without TensorRT support for a smaller download size (approximately 1.4 GB smaller).
  • The Model Picker can now directly select checkpoints in addition to the previous method of selecting from preconfigured models. When a checkpoint is selected, an additional set of inputs appears that lets you add LoRA and Textual Inversion models.
  • For portable distributions, the app now automatically opens a browser window once the server becomes responsive, to avoid confusion over how the interface is accessed. This can be disabled with configuration.
  • For the Windows portable distribution, added a context menu item to the icon displayed in the bottom-right that opens a browser window to the app when clicked.

Issues Fixed

  • Fixed an issue whereby the base Stable Diffusion checkpoint would be downloaded even when not required.
  • Fixed an issue where invocations failed to resume after the page was refreshed. You should now properly return to the current execution if you refresh the page while the engine is still diffusing.
  • Fixed an issue with the File > Save dialog not working.

Changes

  • Altered language around Models: these are now called 'Model Configurations' to avoid confusion between checkpoints and sets of configuration.
  • Drastically reduced VRAM usage of the initial checkpoint load by offloading to CPU.
  • Reduced the size of the Windows and Linux TensorRT installations by ~400 MB.
  • Added automated, isolated builds to reduce dependency issues going forward.