Can't generate #15
Replies: 3 comments 2 replies
-
Hello! Thank you very much for the report. This appears to be an issue with the way Enfugue is reading that particular model. Using taurealMix_v37Fp16prunedVae.safetensors reproduces the issue for me.
It looks like there are some reports on that model (https://civitai.com/models/63323/taureal-mix) of an issue with the VAE state dictionary, which is exactly what's happening here. I'm guessing you've used this checkpoint successfully in other applications, though, so my guess is that those applications substitute their own VAE when the model's is malformed. Very sorry about that; I will investigate how I can make this checkpoint work. Thank you for the report!
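For context, here is a minimal Python sketch (hypothetical, not Enfugue's actual code) of what that substitution could look like: check whether the checkpoint's VAE state dictionary contains the keys the diffusers converter indexes, and fall back to a standalone VAE when it does not. The key list here is illustrative, taken from the `KeyError` in the traceback.

```python
# Hypothetical illustration of the failure mode: diffusers'
# convert_ldm_vae_checkpoint indexes required VAE keys directly, so a
# checkpoint with an incomplete VAE state dictionary raises KeyError
# instead of falling back to a standalone VAE.

# Illustrative subset of the keys the converter looks up; the first one
# is the key from the reported traceback.
REQUIRED_VAE_KEYS = ("encoder.conv_in.weight", "encoder.conv_in.bias")

def vae_state_dict_is_complete(vae_state_dict):
    """Return True if all keys the converter needs are present."""
    return all(key in vae_state_dict for key in REQUIRED_VAE_KEYS)

# The reported checkpoint is missing the encoder weights, so an
# application could detect that and load its own VAE instead:
malformed = {"decoder.conv_in.weight": None}
print(vae_state_dict_is_complete(malformed))  # prints False
```

An application that performs this check up front can swap in a known-good VAE (for example, a default one it ships with) rather than letting the conversion crash.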
-
Thank you very much, I'll downgrade to the version without the VAE for now.
While I've got you here, any plans to let LoRAs, Textual Inversions, and LyCORIS be invoked from prompts without having to create an individual model configuration each time?
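For reference, Automatic1111 handles this with inline prompt syntax like `<lora:name:weight>`. A minimal parser sketch for that syntax (a hypothetical helper, not code from either project) could look like:

```python
import re

# A1111-style prompts embed LoRA references inline, e.g.
# "a castle <lora:detailTweaker:0.8>". This regex captures the LoRA
# name and an optional weight (defaulting to 1.0 when omitted).
LORA_TOKEN = re.compile(r"<lora:([^:>]+)(?::([\d.]+))?>")

def extract_loras(prompt):
    """Return (clean_prompt, [(lora_name, weight), ...])."""
    loras = [(m.group(1), float(m.group(2) or 1.0))
             for m in LORA_TOKEN.finditer(prompt)]
    # Strip the tokens and collapse any doubled spaces left behind.
    clean = re.sub(r"\s{2,}", " ", LORA_TOKEN.sub("", prompt)).strip()
    return clean, loras

clean, loras = extract_loras("a castle <lora:detailTweaker:0.8>")
print(clean, loras)  # a castle [('detailTweaker', 0.8)]
```

The parsed (name, weight) pairs could then be applied to the pipeline at generation time instead of being baked into a saved model definition.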
…On Wed, Jun 28, 2023, 2:21 PM painebenjamin ***@***.***> wrote:
Hello! Thank you very much for the report.
This appears to be an issue with the way Enfugue is reading that
particular model. Using taurealMix_v37Fp16prunedVae.safetensors
reproduces the issue for me:
2023-06-28 20:15:19,434 [enfugue] ERROR (engine.py:263) Traceback (most recent call last):
File "C:\cygwin64\home\paine\projects\enfugue\src\python\enfugue\diffusion\process.py", line 328, in run
response["result"] = self.execute_diffusion_plan(
File "C:\cygwin64\home\paine\projects\enfugue\src\python\enfugue\diffusion\process.py", line 114, in execute_diffusion_plan
return plan.execute(
File "C:\cygwin64\home\paine\projects\enfugue\src\python\enfugue\diffusion\plan.py", line 587, in execute
images, nsfw = self.execute_nodes(
File "C:\cygwin64\home\paine\projects\enfugue\src\python\enfugue\diffusion\plan.py", line 743, in execute_nodes
self.prepare_pipeline(pipeline)
File "C:\cygwin64\home\paine\projects\enfugue\src\python\enfugue\diffusion\plan.py", line 712, in prepare_pipeline
pipeline.model = self.model
File "C:\cygwin64\home\paine\projects\enfugue\src\python\enfugue\diffusion\manager.py", line 699, in model
new_model = self.check_convert_checkpoint(new_model)
File "C:\cygwin64\home\paine\projects\enfugue\src\python\enfugue\diffusion\manager.py", line 1132, in check_convert_checkpoint
pipe = download_from_original_stable_diffusion_ckpt(
File "C:\Users\paine\anaconda3\envs\enfugue\lib\site-packages\diffusers\pipelines\stable_diffusion\convert_from_ckpt.py", line 1185, in download_from_original_stable_diffusion_ckpt
converted_vae_checkpoint = convert_ldm_vae_checkpoint(checkpoint, vae_config)
File "C:\Users\paine\anaconda3\envs\enfugue\lib\site-packages\diffusers\pipelines\stable_diffusion\convert_from_ckpt.py", line 580, in convert_ldm_vae_checkpoint
new_checkpoint["encoder.conv_in.weight"] = vae_state_dict["encoder.conv_in.weight"]
KeyError: 'encoder.conv_in.weight'
It looks like there are some reports on that model
<https://civitai.com/models/63323/taureal-mix> of an issue with the VAE
state dictionary (which is exactly what's happening here.)
I'm guessing you've used this checkpoint successfully in other
applications, though - so my guess is that other applications have used
their own VAE in such instances when the model's is malformed.
Very sorry about that - I will investigate how I can make this checkpoint
work. Thank you for the report!
-
I only ask because I'm on a 2060 and can't use TRT, and I've only used
Automatic1111 up until now. So I'm wondering if that could be a toggleable
feature. If not, that's well within your rights; I just may not see much use
from Enfugue if that's the case.
…On Wed, Jun 28, 2023 at 9:23 PM painebenjamin ***@***.***> wrote:
I'm hesitant to say no because that would be very useful for people *not*
using TensorRT, but I'm hesitant to say yes because that'll break anyone
using TensorRT, so this would only be a feature for those not using it.
If it turns out that I'm the only one excited about TRT and people don't
really care about it, though, it might make more sense for me to focus the
design more on those *not* using it.
Thank you for your feedback!
-
I don't know what I've done wrong, but I get an error each time it tries to generate:
"'encoder.conv_in.weight'"
I have not tweaked any default settings.