
Model import


Introduction

Note: this page is just a simplified version of the Microsoft Olive Stable Diffusion ONNX conversion guide

To generate images with Unpaint, you will need to install a Stable Diffusion model.

Most models on the Internet are provided as .ckpt and .safetensors files; however, these are not directly usable in Unpaint and must first be converted to the .onnx format. This guide walks you through that process.


Prerequisites

Install the following - default settings are OK for this tutorial. The main prerequisite is a Python distribution; Git is also needed, but can be installed through conda later if you do not have it.

There are a number of ways to deploy Python on Windows, including installing it from the Microsoft Store, downloading it from Python.org, or using Anaconda/Miniconda. We will describe the process with Miniconda.

Preparation

  • Start the Anaconda Prompt from the Start Menu
  • Create a working folder for the model conversion
mkdir C:\olive-sd
cd C:\olive-sd
  • Create a conda environment and activate it
conda create -n olive-env python=3.8
conda activate olive-env
  • Install pip
conda install pip
  • If you do not have git, you can also install it
conda install git
  • Install Microsoft Olive
git clone https://github.com/microsoft/olive --branch v0.2.1
cd olive/examples/directml/stable_diffusion

pip install olive-ai[directml]==0.2.1
pip install -r requirements.txt

Converting a model

Follow these steps to convert a model to the ONNX format expected by Unpaint:

  • Find a model you would like to convert and note down its name, for example: Stable Diffusion 1.5
  • Go to HuggingFace.co, search for the same name, and open the result you find most promising, e.g. https://huggingface.co/runwayml/stable-diffusion-v1-5
  • Note down the username / repository part, in the above case it is: runwayml/stable-diffusion-v1-5
  • Open and activate the conda environment as described above, then go to the olive/examples/directml/stable_diffusion directory.
  • Execute the following command: python stable_diffusion.py --optimize --model_id runwayml/stable-diffusion-v1-5
  • Wait patiently as the conversion will take some time
  • Once the process completes, the output will be placed into the following directory: models\optimized\<user>\<repository>, so in this case it will be models\optimized\runwayml\stable-diffusion-v1-5
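
For reference, the output layout typically looks like the sketch below. The exact set of subfolders comes from the Olive example and may vary between versions, so treat this as an illustration rather than a guarantee:

models\optimized\runwayml\stable-diffusion-v1-5
├── text_encoder\model.onnx
├── unet\model.onnx
├── vae_decoder\model.onnx
├── vae_encoder\model.onnx
└── safety_checker\model.onnx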

Tip: many models do not include a VAE, in which case the system falls back to the original VAE shipped with Stable Diffusion; this is known to produce blurrier output and artifacts (such as blue spots) on many of the generated images. To avoid this you may use this updated version of the VAE: clone the target model, overwrite the contents of the vae_decoder folder with the above VAE, and run the above process on the local directory by specifying .\your_model_here as the model id.
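
As a rough sketch, the VAE replacement could look like this in the Anaconda Prompt - <improved-vae-repo> stands in for the VAE repository linked above, and your_model_here is just a placeholder name:

git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 your_model_here
git clone https://huggingface.co/<improved-vae-repo> improved_vae
rem overwrite the original VAE weights with the improved ones
xcopy /E /Y improved_vae your_model_here\vae_decoder\
python stable_diffusion.py --optimize --model_id .\your_model_here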

ControlNet support

ControlNet allows guiding the image generation using an input image - also called a condition image. You can read more about the topic here.

As a model author, you need two things to use ControlNet:

  • A ControlNet enabled Stable Diffusion model which can generate the latent image.
  • A ControlNet model which uses a condition image to drive Stable Diffusion. These come in many forms depending on the input they accept, such as depth, edges, normals, OpenPose skeletons etc.

The sections below describe how to convert each of these to ONNX.

Creating ControlNet enabled Stable Diffusion models

To make your model ControlNet enabled, it is necessary to use a different input configuration for the U-Net model, so that the output of the ControlNet model can be pipelined into Stable Diffusion.
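
For background, the sketch below shows this wiring with the standard diffusers Python API and dummy tensors. The model names are examples and this is not Unpaint's actual code, but it illustrates the extra residual inputs a ControlNet enabled U-Net has to accept:

import torch
from diffusers import ControlNetModel, UNet2DConditionModel

# load a ControlNet and the U-Net of a Stable Diffusion model
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet")

latents = torch.randn(1, 4, 64, 64)      # latent image being denoised
timestep = torch.tensor(10)              # current scheduler timestep
text_emb = torch.randn(1, 77, 768)       # text encoder output
condition = torch.randn(1, 3, 512, 512)  # condition image, e.g. an edge map

# the ControlNet turns the condition image into residuals...
down_res, mid_res = controlnet(
    latents, timestep,
    encoder_hidden_states=text_emb,
    controlnet_cond=condition,
    return_dict=False)

# ...which the U-Net takes as additional inputs - these extra inputs are
# why it needs a different input configuration than a plain U-Net
noise_pred = unet(
    latents, timestep,
    encoder_hidden_states=text_emb,
    down_block_additional_residuals=down_res,
    mid_block_additional_residual=mid_res).sample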

To do this (see the command sketch after this list):

  • add my fork of Olive as a remote to your existing olive repository
  • checkout the unpaint branch
  • go to the examples\directml\stable_diffusion folder
  • activate your conda environment as described above
  • open config_unet_controlnet.json, and specify the model you would like to convert at model_path
  • run convert_controlnet.py
  • find the output model.onnx in the newest subfolder of .\cache\models\
  • create a directory named controlnet in your model directory, and put the model.onnx in it
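
Put together, the steps above might look like this in the Anaconda Prompt - <fork-url> stands in for the fork's URL linked above, and the cache subfolder name will differ on your machine:

git remote add unpaint <fork-url>
git fetch unpaint
git checkout -b unpaint unpaint/unpaint
cd examples\directml\stable_diffusion
conda activate olive-env
rem edit config_unet_controlnet.json and set model_path before running this
python convert_controlnet.py
rem copy the newest cache output into your model directory
mkdir your_model_here\controlnet
copy .\cache\models\<newest-subfolder>\model.onnx your_model_here\controlnet\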

Now you can use ControlNet with your model.

Converting ControlNet models to ONNX

Please note that currently ControlNet models are always loaded by Unpaint from this repository. If you convert a new model and want to load it, you will need to edit the source code of Unpaint, or use the Axodox-MachineLearning library directly in your app.

The process is almost the same as above, except:

  • you need to edit convert_controlnet.py and change the submodel_name to controlnet
  • you use the config_controlnet.json to specify the model to convert
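
The submodel_name change is a one-line edit in Python - the exact surrounding code in convert_controlnet.py may differ between versions, but the intent is:

# in convert_controlnet.py: convert the ControlNet submodel
# instead of the default one
submodel_name = "controlnet"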

You can then manually copy the resulting model into the controlnet folder of Unpaint (you may open the current project directory in File Explorer, then go two levels up to find it). You will need to edit this file to expose the model on the Unpaint UI.

Importing a converted model into Unpaint

Open the Model Library in Unpaint and press the Import model from disk option, then select the output directory generated above.

Troubleshooting

If your converted model fails to work:

  • Check our reference conversion by supplying axodoxian/stable_diffusion_onnx on the model library page of Unpaint. If this model works, you can rule out hardware compatibility and software issues.
  • Compare the file and directory layout of your model to the reference.
  • Using Netron, you can compare the input and output configurations of your model to the reference model.

If all else fails, you can raise a question on the discussions page.

Sharing the model

If you have converted a model and want to use it on other computers, you may share it with others on HuggingFace.co. To do this, create a repo and then upload your converted model. The directory and file names should remain the same, e.g. the vae_decoder directory should be placed in the root of your repository.
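
As a sketch, the upload could look like this with git - <user>/<repository> is your own repo, created on the HuggingFace website first:

git lfs install
git clone https://huggingface.co/<user>/<repository>
rem copy the converted model's folders (unet, vae_decoder, ...) into the clone
xcopy /E /Y models\optimized\runwayml\stable-diffusion-v1-5 <repository>\
cd <repository>
git add .
git commit -m "Add converted ONNX model"
git push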

Once this is done, you can use the Import model from HuggingFace option and specify the model ID as user/repository corresponding to your model.