
The size of tensor a (640) must match the size of tensor b (320) at non-singleton #43

Open
BlaiseRodrigues opened this issue Nov 21, 2024 · 16 comments


@BlaiseRodrigues

Hi, on Windows I am getting the following error when I use the add brush tool:

loading in lowvram mode 64.0
0%| | 0/20 [00:00<?, ?it/s]BrushNet inference, step = 0: image batch = 1, got 1 latents, starting from 0
BrushNet inference: sample torch.Size([1, 4, 112, 64]) , CL torch.Size([1, 5, 112, 64]) dtype torch.float16
C:\Users\Blaise\.conda\envs\MagicQuill\lib\site-packages\diffusers\models\resnet.py:323: FutureWarning: `scale` is deprecated and will be removed in version 1.0.0. The `scale` argument is deprecated and will be ignored. Please remove it, as passing it will raise an error in the future. `scale` should directly be passed while calling the underlying pipeline component i.e., via `cross_attention_kwargs`.
deprecate("scale", "1.0.0", deprecation_message)
BrushNet can't find <class 'comfy.ops.disable_weight_init.Conv2d'> layer in 0 input block: None
shape: 56, 112, 32, 64
0%| | 0/20 [00:04<?, ?it/s]
Traceback (most recent call last):
File "C:\Users\Blaise\.conda\envs\MagicQuill\lib\site-packages\gradio\queueing.py", line 624, in process_events
response = await route_utils.call_process_api(
File "C:\Users\Blaise\.conda\envs\MagicQuill\lib\site-packages\gradio\route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
File "C:\Users\Blaise\.conda\envs\MagicQuill\lib\site-packages\gradio\blocks.py", line 2018, in process_api
result = await self.call_function(
File "C:\Users\Blaise\.conda\envs\MagicQuill\lib\site-packages\gradio\blocks.py", line 1567, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "C:\Users\Blaise\.conda\envs\MagicQuill\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "C:\Users\Blaise\.conda\envs\MagicQuill\lib\site-packages\anyio\_backends\_asyncio.py", line 2441, in run_sync_in_worker_thread
return await future
File "C:\Users\Blaise\.conda\envs\MagicQuill\lib\site-packages\anyio\_backends\_asyncio.py", line 943, in run
result = context.run(func, *args)
File "C:\Users\Blaise\.conda\envs\MagicQuill\lib\site-packages\gradio\utils.py", line 846, in wrapper
response = f(*args, **kwargs)
File "I:\MagicQuill\gradio_run.py", line 152, in generate_image_handler
res = generate(
File "I:\MagicQuill\gradio_run.py", line 120, in generate
latent_samples, final_image, lineart_output, color_output = scribbleColorEditModel.process(
File "I:\MagicQuill\MagicQuill\scribble_color_edit.py", line 110, in process
latent_samples = self.ksampler.sample(
File "I:\MagicQuill\MagicQuill\comfyui_utils.py", line 154, in sample
return self.common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "I:\MagicQuill\MagicQuill\comfyui_utils.py", line 146, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "I:\MagicQuill\MagicQuill\comfy\sample.py", line 43, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "I:\MagicQuill\MagicQuill\comfy\samplers.py", line 794, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "I:\MagicQuill\MagicQuill\model_patch.py", line 120, in modified_sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "I:\MagicQuill\MagicQuill\comfy\samplers.py", line 683, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "I:\MagicQuill\MagicQuill\comfy\samplers.py", line 662, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "I:\MagicQuill\MagicQuill\comfy\samplers.py", line 567, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "C:\Users\Blaise\.conda\envs\MagicQuill\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "I:\MagicQuill\MagicQuill\comfy\k_diffusion\sampling.py", line 159, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "I:\MagicQuill\MagicQuill\comfy\samplers.py", line 291, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
File "I:\MagicQuill\MagicQuill\comfy\samplers.py", line 649, in __call__
return self.predict_noise(*args, **kwargs)
File "I:\MagicQuill\MagicQuill\comfy\samplers.py", line 652, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
File "I:\MagicQuill\MagicQuill\comfy\samplers.py", line 277, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
File "I:\MagicQuill\MagicQuill\comfy\samplers.py", line 224, in calc_cond_batch
output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
File "I:\MagicQuill\MagicQuill\model_patch.py", line 52, in brushnet_model_function_wrapper
return apply_model_method(x, timestep, **options_dict['c'])
File "I:\MagicQuill\MagicQuill\comfy\model_base.py", line 113, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "C:\Users\Blaise\.conda\envs\MagicQuill\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\Blaise\.conda\envs\MagicQuill\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "I:\MagicQuill\MagicQuill\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 852, in forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
File "I:\MagicQuill\MagicQuill\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed
x = layer(x, context, transformer_options)
File "C:\Users\Blaise\.conda\envs\MagicQuill\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\Blaise\.conda\envs\MagicQuill\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "I:\MagicQuill\MagicQuill\brushnet_nodes.py", line 1071, in forward_patched_by_brushnet
h += to_add.to(h.dtype).to(h.device)
RuntimeError: The size of tensor a (640) must match the size of tensor b (320) at non-singleton dimension 1

I would appreciate any feedback.

@zliucz
Collaborator

zliucz commented Nov 21, 2024

I haven't encountered this error before. Could you please show me your environment list via pip list? Let's see if there's any mismatch.

@BlaiseRodrigues
Author

BlaiseRodrigues commented Nov 21, 2024

Here it is

(MagicQuill) PS I:\MagicQuill> pip list
Package            Version     Editable project location
------------------ ----------- ------------------------------
accelerate         0.33.0
aiofiles           23.2.1
annotated-types    0.7.0
anyio              4.6.2.post1
bitsandbytes       0.44.1
certifi            2024.8.30
charset-normalizer 3.4.0
click              8.1.7
colorama           0.4.6
diffusers          0.31.0
einops             0.6.1
einops-exts        0.0.4
exceptiongroup     1.2.2
fastapi            0.115.5
ffmpy              0.4.0
filelock           3.16.1
fsspec             2024.10.0
gradio             5.4.0
gradio_client      1.4.2
gradio_magicquill  0.0.1
h11                0.14.0
httpcore           0.17.3
httpx              0.24.1
huggingface-hub    0.26.2
idna               3.10
importlib_metadata 8.5.0
Jinja2             3.1.4
joblib             1.4.2
latex2mathml       3.77.0
llava              1.2.2.post1 I:\MagicQuill\MagicQuill\LLaVA
markdown-it-py     3.0.0
markdown2          2.5.1
MarkupSafe         2.1.5
mdurl              0.1.2
mpmath             1.3.0
networkx           3.4.2
numpy              1.26.4
opencv-python      4.10.0.84
orjson             3.10.11
packaging          24.2
pandas             2.2.3
peft               0.13.2
pillow             11.0.0
pip                24.2
protobuf           4.25.4
psutil             6.1.0
pydantic           2.10.0
pydantic_core      2.27.0
pydub              0.25.1
Pygments           2.18.0
python-dateutil    2.9.0.post0
python-multipart   0.0.12
pytz               2024.2
PyYAML             6.0.2
regex              2024.11.6
requests           2.32.3
rich               13.9.4
ruff               0.7.4
safehttpx          0.1.1
safetensors        0.4.5
scikit-learn       1.2.2
scipy              1.14.1
semantic-version   2.10.0
sentencepiece      0.2.0
setuptools         75.1.0
shellingham        1.5.4
shortuuid          1.0.13
six                1.16.0
sniffio            1.3.1
starlette          0.41.3
svgwrite           1.4.3
sympy              1.13.3
threadpoolctl      3.5.0
timm               0.6.13
tokenizers         0.15.1
tomlkit            0.12.0
torch              2.1.2+cu118
torchaudio         2.1.2+cu118
torchsde           0.2.6
torchvision        0.16.2
tqdm               4.67.0
trampoline         0.1.2
transformers       4.37.2
typer              0.13.1
typing_extensions  4.12.2
tzdata             2024.2
urllib3            2.2.3
uvicorn            0.32.1
wavedrom           2.0.3.post3
webcolors          1.13
websockets         12.0
wheel              0.44.0
zipp               3.21.0

@zliucz
Collaborator

zliucz commented Nov 22, 2024

Hmm, it seems torchvision was not compiled with cu118. Could you please reinstall it with:

pip install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/cu118

Let me know if it works.
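(For reference, whether a wheel was built against CUDA can be read off the pip-reported version string itself: CUDA wheels from the PyTorch index carry a local tag such as "2.1.2+cu118", while a bare "0.16.2" is a CPU-only build. A small, hypothetical helper to check this; the function name is illustrative, not part of any library:)

```python
def is_cuda_build(version: str, cuda_tag: str = "cu118") -> bool:
    """Return True if a pip version string carries the given CUDA local tag.

    CUDA wheels use a local version segment, e.g. "2.1.2+cu118";
    a version with no "+" suffix is a CPU-only build.
    """
    return "+" in version and version.rsplit("+", 1)[1] == cuda_tag

print(is_cuda_build("2.1.2+cu118"))  # True
print(is_cuda_build("0.16.2"))       # False: CPU-only wheel
```

Applied to the list above: torch and torchaudio report "+cu118", but torchvision is plain "0.16.2", which is what the reinstall command is meant to fix.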

@BlaiseRodrigues
Author

Unfortunately, I am getting the same error after reinstalling.

@tristan88888

I installed torch in my conda environment using:

pip install torch==2.3.0+cu118 torchvision==0.18 torchaudio==2.3.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118

and MagicQuill works fine.

@Chrumps

Chrumps commented Nov 25, 2024

The update to version 2.3.0 did not resolve the issue, and I still receive the message:
RuntimeError: The size of tensor a (640) must match the size of tensor b (320) at non-singleton dimension 1

torch              2.3.0+cu118
torchaudio         2.3.0+cu118
torchvision        0.18.0+cu118

@zliucz
Collaborator

zliucz commented Nov 26, 2024

Hi. I've never encountered this issue on our machines. Could you please compare your environment against ours using Diffchecker and manually update any packages that differ? Below is my environment:

(MagicQuill) C:\Users\zliucz\MagicQuill>pip list
Package            Version      Editable project location
------------------ ------------ ------------------------------------------
accelerate         0.33.0
aiofiles           23.2.1
annotated-types    0.7.0
anyio              4.6.2.post1
bitsandbytes       0.44.1
certifi            2022.12.7
charset-normalizer 2.1.1
click              8.1.7
colorama           0.4.6
diffusers          0.31.0
einops             0.6.1
einops-exts        0.0.4
exceptiongroup     1.2.2
fastapi            0.115.5
ffmpy              0.4.0
filelock           3.13.1
fsspec             2024.2.0
gradio             5.4.0
gradio_client      1.4.2
gradio_magicquill  0.0.1
h11                0.14.0
httpcore           0.17.3
httpx              0.24.1
huggingface-hub    0.26.2
idna               3.4
importlib_metadata 8.5.0
Jinja2             3.1.3
joblib             1.4.2
latex2mathml       3.77.0
llava              1.2.2.post1  C:\Users\antvi\MagicQuill\MagicQuill\LLaVA
markdown-it-py     3.0.0
markdown2          2.5.1
MarkupSafe         2.1.5
mdurl              0.1.2
mpmath             1.3.0
networkx           3.2.1
numpy              1.26.3
opencv-python      4.10.0.84
orjson             3.10.11
packaging          24.2
pandas             2.2.3
peft               0.13.2
pillow             10.2.0
pip                24.2
protobuf           4.25.4
psutil             6.1.0
pydantic           2.10.0
pydantic_core      2.27.0
pydub              0.25.1
Pygments           2.18.0
python-dateutil    2.9.0.post0
python-multipart   0.0.12
pytz               2024.2
PyYAML             6.0.2
regex              2024.11.6
requests           2.28.1
rich               13.9.4
ruff               0.7.4
safehttpx          0.1.1
safetensors        0.4.5
scikit-learn       1.2.2
scipy              1.14.1
semantic-version   2.10.0
sentencepiece      0.2.0
setuptools         75.1.0
shellingham        1.5.4
shortuuid          1.0.13
six                1.16.0
sniffio            1.3.1
starlette          0.41.3
svgwrite           1.4.3
sympy              1.13.1
threadpoolctl      3.5.0
timm               0.6.13
tokenizers         0.15.1
tomlkit            0.12.0
torch              2.1.2+cu118
torchsde           0.2.6
torchvision        0.16.2+cu118
tqdm               4.67.0
trampoline         0.1.2
transformers       4.37.2
typer              0.13.1
typing_extensions  4.12.2
tzdata             2024.2
urllib3            1.26.13
uvicorn            0.32.1
wavedrom           2.0.3.post3
webcolors          1.13
websockets         12.0
wheel              0.44.0
zipp               3.21.0
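The comparison can also be done with a short script instead of Diffchecker: parse both pip list dumps into package/version maps and print only the entries that differ. A rough sketch (the function names are illustrative; the parsing assumes the plain two-column format shown above):

```python
def parse_pip_list(text: str) -> dict:
    """Parse `pip list` output into {package: version}, skipping the header."""
    pkgs = {}
    for line in text.splitlines():
        parts = line.split()
        # Skip the "Package Version" header and the "----" separator row.
        if len(parts) >= 2 and parts[0] != "Package" and not parts[0].startswith("-"):
            pkgs[parts[0]] = parts[1]
    return pkgs

def diff_envs(mine: str, theirs: str) -> list:
    """Return (package, my_version, their_version) for every mismatch."""
    a, b = parse_pip_list(mine), parse_pip_list(theirs)
    return [(name, a.get(name, "missing"), b.get(name, "missing"))
            for name in sorted(set(a) | set(b))
            if a.get(name) != b.get(name)]

# Example with two tiny excerpts:
mine = "Package Version\n------- -------\ntorch 2.1.2+cu118\ntorchvision 0.16.2"
ref = "Package Version\n------- -------\ntorch 2.1.2+cu118\ntorchvision 0.16.2+cu118"
for name, mine_v, ref_v in diff_envs(mine, ref):
    print(f"{name}: {mine_v} (mine) vs {ref_v} (reference)")
```

Note that the reporter's list above differs from this reference environment in several pinned packages (e.g. torchvision 0.16.2 vs 0.16.2+cu118), which is exactly the kind of mismatch this check surfaces.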

@MikeAiJF

[image attached]
May I ask what causes this?

@rafalfr

rafalfr commented Nov 30, 2024

The same problem here:

Apply color controlnet
model.device: cuda:0
Base model type: SD1.5
BrushNet image.shape = torch.Size([1, 640, 512, 3]) mask.shape = torch.Size([1, 640, 512])
Requested to load AutoencoderKL
Loading 1 new model
loading in lowvram mode 64.0
BrushNet CL: image_latents shape = torch.Size([1, 4, 80, 64]) interpolated_mask shape = torch.Size([1, 1, 80, 64])
Requested to load ControlNet
Requested to load ControlNet
Requested to load BaseModel
Loading 3 new models
loading in lowvram mode 64.0
loading in lowvram mode 64.0
loading in lowvram mode 64.0
0%| | 0/1 [00:00<?, ?it/s]BrushNet inference, step = 0: image batch = 1, got 1 latents, starting from 0
BrushNet inference: sample torch.Size([1, 4, 80, 64]) , CL torch.Size([1, 5, 80, 64]) dtype torch.float16
BrushNet can't find <class 'comfy.ops.disable_weight_init.Conv2d'> layer in 0 input block: None
0%| | 0/1 [00:10<?, ?it/s]
Traceback (most recent call last):
File "H:\conda\MagicQuill\lib\site-packages\gradio\queueing.py", line 624, in process_events
response = await route_utils.call_process_api(
File "H:\conda\MagicQuill\lib\site-packages\gradio\route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
File "H:\conda\MagicQuill\lib\site-packages\gradio\blocks.py", line 2018, in process_api
result = await self.call_function(
File "H:\conda\MagicQuill\lib\site-packages\gradio\blocks.py", line 1567, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "H:\conda\MagicQuill\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "H:\conda\MagicQuill\lib\site-packages\anyio\_backends\_asyncio.py", line 2441, in run_sync_in_worker_thread
return await future
File "H:\conda\MagicQuill\lib\site-packages\anyio\_backends\_asyncio.py", line 943, in run
result = context.run(func, *args)
File "H:\conda\MagicQuill\lib\site-packages\gradio\utils.py", line 846, in wrapper
response = f(*args, **kwargs)
File "H:\MagicQuill\gradio_run.py", line 152, in generate_image_handler
res = generate(
File "H:\MagicQuill\gradio_run.py", line 120, in generate
latent_samples, final_image, lineart_output, color_output = scribbleColorEditModel.process(
File "H:\MagicQuill\MagicQuill\scribble_color_edit.py", line 110, in process
latent_samples = self.ksampler.sample(
File "H:\MagicQuill\MagicQuill\comfyui_utils.py", line 154, in sample
return self.common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "H:\MagicQuill\MagicQuill\comfyui_utils.py", line 146, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "H:\MagicQuill\MagicQuill\comfy\sample.py", line 43, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "H:\MagicQuill\MagicQuill\comfy\samplers.py", line 794, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "H:\MagicQuill\MagicQuill\model_patch.py", line 120, in modified_sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "H:\MagicQuill\MagicQuill\comfy\samplers.py", line 683, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "H:\MagicQuill\MagicQuill\comfy\samplers.py", line 662, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "H:\MagicQuill\MagicQuill\comfy\samplers.py", line 567, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "H:\conda\MagicQuill\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "H:\MagicQuill\MagicQuill\comfy\k_diffusion\sampling.py", line 159, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "H:\MagicQuill\MagicQuill\comfy\samplers.py", line 291, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
File "H:\MagicQuill\MagicQuill\comfy\samplers.py", line 649, in __call__
return self.predict_noise(*args, **kwargs)
File "H:\MagicQuill\MagicQuill\comfy\samplers.py", line 652, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
File "H:\MagicQuill\MagicQuill\comfy\samplers.py", line 277, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
File "H:\MagicQuill\MagicQuill\comfy\samplers.py", line 224, in calc_cond_batch
output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
File "H:\MagicQuill\MagicQuill\model_patch.py", line 52, in brushnet_model_function_wrapper
return apply_model_method(x, timestep, **options_dict['c'])
File "H:\MagicQuill\MagicQuill\comfy\model_base.py", line 113, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "H:\conda\MagicQuill\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "H:\conda\MagicQuill\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "H:\MagicQuill\MagicQuill\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 852, in forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
File "H:\MagicQuill\MagicQuill\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed
x = layer(x, context, transformer_options)
File "H:\conda\MagicQuill\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "H:\conda\MagicQuill\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "H:\MagicQuill\MagicQuill\brushnet_nodes.py", line 1070, in forward_patched_by_brushnet
h += to_add.to(h.dtype).to(h.device)
RuntimeError: The size of tensor a (640) must match the size of tensor b (320) at non-singleton dimension 1

@Dazmo1221

RuntimeError: The size of tensor a (640) must match the size of tensor b (320) at non-singleton dimension 1
INFO: 127.0.0.1:50037 - "POST /gradio_api/queue/join HTTP/1.1" 200 OK
INFO: 127.0.0.1:50037 - "GET /gradio_api/queue/data?session_hash=a7gy9bg9yw HTTP/1.1" 200 OK
Apply edge controlnet
Base model type: SD1.5
BrushNet image.shape = torch.Size([1, 665, 512, 3]) mask.shape = torch.Size([1, 665, 512])
Requested to load AutoencoderKL
Loading 1 new model
loading in lowvram mode 64.0
BrushNet CL: image_latents shape = torch.Size([1, 4, 83, 64]) interpolated_mask shape = torch.Size([1, 1, 83, 64])
Requested to load ControlNet
Requested to load BaseModel
Loading 2 new models
loading in lowvram mode 64.0
loading in lowvram mode 64.0
0%| | 0/20 [00:00<?, ?it/s]BrushNet inference, step = 0: image batch = 1, got 1 latents, starting from 0
BrushNet inference: sample torch.Size([1, 4, 83, 64]) , CL torch.Size([1, 5, 83, 64]) dtype torch.float16
BrushNet can't find <class 'comfy.ops.disable_weight_init.Conv2d'> layer in 0 input block: None
0%| | 0/20 [00:02<?, ?it/s]
Traceback (most recent call last):
File "C:\Users\gamit\anaconda3\envs\MagicQuill\lib\site-packages\gradio\queueing.py", line 624, in process_events
response = await route_utils.call_process_api(
File "C:\Users\gamit\anaconda3\envs\MagicQuill\lib\site-packages\gradio\route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
File "C:\Users\gamit\anaconda3\envs\MagicQuill\lib\site-packages\gradio\blocks.py", line 2018, in process_api
result = await self.call_function(
File "C:\Users\gamit\anaconda3\envs\MagicQuill\lib\site-packages\gradio\blocks.py", line 1567, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "C:\Users\gamit\anaconda3\envs\MagicQuill\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "C:\Users\gamit\anaconda3\envs\MagicQuill\lib\site-packages\anyio\_backends\_asyncio.py", line 2505, in run_sync_in_worker_thread
return await future
File "C:\Users\gamit\anaconda3\envs\MagicQuill\lib\site-packages\anyio\_backends\_asyncio.py", line 1005, in run
result = context.run(func, *args)
File "C:\Users\gamit\anaconda3\envs\MagicQuill\lib\site-packages\gradio\utils.py", line 846, in wrapper
response = f(*args, **kwargs)
File "C:\MagicQuill\gradio_run.py", line 155, in generate_image_handler
res = generate(
File "C:\MagicQuill\gradio_run.py", line 123, in generate
latent_samples, final_image, lineart_output, color_output = scribbleColorEditModel.process(
File "C:\MagicQuill\MagicQuill\scribble_color_edit.py", line 110, in process
latent_samples = self.ksampler.sample(
File "C:\MagicQuill\MagicQuill\comfyui_utils.py", line 154, in sample
return self.common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "C:\MagicQuill\MagicQuill\comfyui_utils.py", line 146, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "C:\MagicQuill\MagicQuill\comfy\sample.py", line 43, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "C:\MagicQuill\MagicQuill\comfy\samplers.py", line 794, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "C:\MagicQuill\MagicQuill\model_patch.py", line 120, in modified_sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "C:\MagicQuill\MagicQuill\comfy\samplers.py", line 683, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "C:\MagicQuill\MagicQuill\comfy\samplers.py", line 662, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "C:\MagicQuill\MagicQuill\comfy\samplers.py", line 567, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "C:\Users\gamit\anaconda3\envs\MagicQuill\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\MagicQuill\MagicQuill\comfy\k_diffusion\sampling.py", line 159, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "C:\MagicQuill\MagicQuill\comfy\samplers.py", line 291, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
File "C:\MagicQuill\MagicQuill\comfy\samplers.py", line 649, in __call__
return self.predict_noise(*args, **kwargs)
File "C:\MagicQuill\MagicQuill\comfy\samplers.py", line 652, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
File "C:\MagicQuill\MagicQuill\comfy\samplers.py", line 277, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
File "C:\MagicQuill\MagicQuill\comfy\samplers.py", line 224, in calc_cond_batch
output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
File "C:\MagicQuill\MagicQuill\model_patch.py", line 52, in brushnet_model_function_wrapper
return apply_model_method(x, timestep, **options_dict['c'])
File "C:\MagicQuill\MagicQuill\comfy\model_base.py", line 113, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "C:\Users\gamit\anaconda3\envs\MagicQuill\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\gamit\anaconda3\envs\MagicQuill\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\MagicQuill\MagicQuill\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 852, in forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
File "C:\MagicQuill\MagicQuill\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed
x = layer(x, context, transformer_options)
File "C:\Users\gamit\anaconda3\envs\MagicQuill\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\gamit\anaconda3\envs\MagicQuill\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\MagicQuill\MagicQuill\brushnet_nodes.py", line 1070, in forward_patched_by_brushnet
h += to_add.to(h.dtype).to(h.device)
RuntimeError: The size of tensor a (640) must match the size of tensor b (320) at non-singleton dimension 1

@JmMndz

JmMndz commented Dec 12, 2024

Start over from scratch, and install requirements.txt before installing torch. That is how I managed to get rid of this error.

@JmMndz

JmMndz commented Dec 12, 2024

I am getting this other error though:
D:\AI\MagicQuill\MagicQuill\pidi.py:334: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at ..\torch\csrc\tensor\python_tensor.cpp:85.)
buffer = torch.cuda.FloatTensor(shape[0], shape[1], 5 * 5).fill_(0)
Base model type: SD1.5
BrushNet image.shape = torch.Size([1, 512, 767, 3]) mask.shape = torch.Size([1, 512, 767])
Requested to load AutoencoderKL
Loading 1 new model
loading in lowvram mode 64.0
BrushNet CL: image_latents shape = torch.Size([1, 4, 64, 95]) interpolated_mask shape = torch.Size([1, 1, 64, 95])
Requested to load ControlNet
Requested to load BaseModel
Loading 2 new models
loading in lowvram mode 64.0
loading in lowvram mode 64.0
0%| | 0/20 [00:00<?, ?it/s]BrushNet inference, step = 0: image batch = 1, got 1 latents, starting from 0
BrushNet inference: sample torch.Size([1, 4, 64, 95]) , CL torch.Size([1, 5, 64, 95]) dtype torch.float16
0%| | 0/20 [00:03<?, ?it/s]
Traceback (most recent call last):
File "C:\Users\Jaime\AppData\Roaming\Python\Python310\site-packages\gradio\queueing.py", line 624, in process_events
response = await route_utils.call_process_api(
File "C:\Users\Jaime\AppData\Roaming\Python\Python310\site-packages\gradio\route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
File "C:\Users\Jaime\AppData\Roaming\Python\Python310\site-packages\gradio\blocks.py", line 2018, in process_api
result = await self.call_function(
File "C:\Users\Jaime\AppData\Roaming\Python\Python310\site-packages\gradio\blocks.py", line 1567, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "C:\Users\Jaime\AppData\Roaming\Python\Python310\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "C:\Users\Jaime\AppData\Roaming\Python\Python310\site-packages\anyio\_backends\_asyncio.py", line 2505, in run_sync_in_worker_thread
return await future
File "C:\Users\Jaime\AppData\Roaming\Python\Python310\site-packages\anyio\_backends\_asyncio.py", line 1005, in run
result = context.run(func, *args)
File "C:\Users\Jaime\AppData\Roaming\Python\Python310\site-packages\gradio\utils.py", line 846, in wrapper
response = f(*args, **kwargs)
File "D:\AI\MagicQuill\gradio_run.py", line 155, in generate_image_handler
res = generate(
File "D:\AI\MagicQuill\gradio_run.py", line 123, in generate
latent_samples, final_image, lineart_output, color_output = scribbleColorEditModel.process(
File "D:\AI\MagicQuill\MagicQuill\scribble_color_edit.py", line 110, in process
latent_samples = self.ksampler.sample(
File "D:\AI\MagicQuill\MagicQuill\comfyui_utils.py", line 154, in sample
return self.common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "D:\AI\MagicQuill\MagicQuill\comfyui_utils.py", line 146, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "D:\AI\MagicQuill\MagicQuill\comfy\sample.py", line 43, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "D:\AI\MagicQuill\MagicQuill\comfy\samplers.py", line 794, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "D:\AI\MagicQuill\MagicQuill\model_patch.py", line 120, in modified_sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "D:\AI\MagicQuill\MagicQuill\comfy\samplers.py", line 683, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "D:\AI\MagicQuill\MagicQuill\comfy\samplers.py", line 662, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "D:\AI\MagicQuill\MagicQuill\comfy\samplers.py", line 567, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "C:\Users\Jaime\AppData\Roaming\Python\Python310\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\MagicQuill\MagicQuill\comfy\k_diffusion\sampling.py", line 159, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "D:\AI\MagicQuill\MagicQuill\comfy\samplers.py", line 291, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
File "D:\AI\MagicQuill\MagicQuill\comfy\samplers.py", line 649, in __call__
return self.predict_noise(*args, **kwargs)
File "D:\AI\MagicQuill\MagicQuill\comfy\samplers.py", line 652, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
File "D:\AI\MagicQuill\MagicQuill\comfy\samplers.py", line 277, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
File "D:\AI\MagicQuill\MagicQuill\comfy\samplers.py", line 224, in calc_cond_batch
output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
File "D:\AI\MagicQuill\MagicQuill\model_patch.py", line 50, in brushnet_model_function_wrapper
method(unet, xc, t, to, control)
File "D:\AI\MagicQuill\MagicQuill\brushnet_nodes.py", line 1022, in brushnet_forward
input_samples, mid_sample, output_samples = brushnet_inference(x, timesteps, transformer_options, debug)
File "C:\Users\Jaime\AppData\Roaming\Python\Python310\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\MagicQuill\MagicQuill\brushnet_nodes.py", line 933, in brushnet_inference
return brushnet(x,
File "C:\Users\Jaime\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\Jaime\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\MagicQuill\MagicQuill\brushnet\brushnet.py", line 785, in forward
emb = self.time_embedding(t_emb, timestep_cond)
File "C:\Users\Jaime\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\Jaime\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\Jaime\AppData\Roaming\Python\Python310\site-packages\diffusers\models\embeddings.py", line 807, in forward
sample = self.linear_1(sample)
File "C:\Users\Jaime\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\Jaime\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\Jaime\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
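The final error means an fp16 matrix multiply was dispatched to the CPU, where many torch builds don't implement it (the model was loaded in half precision but ran on the CPU instead of the GPU). A minimal sketch of the failure mode and the usual workaround — this is illustrative, not MagicQuill's code, and the exact behavior depends on your torch build:

```python
import torch

lin = torch.nn.Linear(3, 4)
x = torch.randn(2, 3)

try:
    # fp16 linear layers on CPU hit "addmm_impl_cpu_" not implemented
    # for 'Half' on many torch builds; fp16 on CUDA is fine.
    out = lin.half()(x.half())
except RuntimeError:
    # Common workarounds: run in fp32 on CPU, or move model + inputs to CUDA.
    out = lin.float()(x.float())

print(out.shape)  # torch.Size([2, 4])
```

In MagicQuill's case the practical fixes are to make sure the pipeline actually runs on CUDA, or to disable half precision for the CPU path.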

@RafaelS3iwa

Start all over again, and this time install requirements.txt before you install torch. That is how I managed to get rid of that error.

Mine has the same problem, "The size of tensor a (640) must match the size of tensor b (320) at non-singleton dimension 1". I tried what you said, but it didn't work.

@happydutch

I tried all of the things suggested above, such as a clean install, installing requirements, etc., but I still get the same error.

I'm using the Windows install batch file.

Package            Version      Editable project location
------------------ ------------ ------------------------------
accelerate         0.33.0
aiofiles           23.2.1
annotated-types    0.7.0
anyio              4.7.0
bitsandbytes       0.45.0
certifi            2022.12.7
charset-normalizer 2.1.1
click              8.1.7
colorama           0.4.6
diffusers          0.31.0
einops             0.6.1
einops-exts        0.0.4
exceptiongroup     1.2.2
fastapi            0.115.6
ffmpy              0.4.0
filelock           3.13.1
fsspec             2024.2.0
gradio             5.4.0
gradio_client      1.4.2
gradio_magicquill  0.0.1
h11                0.14.0
httpcore           0.17.3
httpx              0.24.1
huggingface-hub    0.26.5
idna               3.4
importlib_metadata 8.5.0
Jinja2             3.1.3
joblib             1.4.2
latex2mathml       3.77.0
llava              1.2.2.post1  P:\MagicQuill\MagicQuill\LLaVA
markdown-it-py     3.0.0
markdown2          2.5.2
MarkupSafe         2.1.5
mdurl              0.1.2
mpmath             1.3.0
networkx           3.2.1
numpy              1.26.3
opencv-python      4.10.0.84
orjson             3.10.12
packaging          24.2
pandas             2.2.3
peft               0.13.2
pillow             10.2.0
pip                24.2
protobuf           4.25.4
psutil             6.1.0
pydantic           2.10.3
pydantic_core      2.27.1
pydub              0.25.1
Pygments           2.18.0
python-dateutil    2.9.0.post0
python-multipart   0.0.12
pytz               2024.2
PyYAML             6.0.2
regex              2024.11.6
requests           2.28.1
rich               13.9.4
ruff               0.8.3
safehttpx          0.1.6
safetensors        0.4.5
scikit-learn       1.2.2
scipy              1.14.1
semantic-version   2.10.0
sentencepiece      0.2.0
setuptools         75.1.0
shellingham        1.5.4
shortuuid          1.0.13
six                1.17.0
sniffio            1.3.1
starlette          0.41.3
svgwrite           1.4.3
sympy              1.13.1
threadpoolctl      3.5.0
timm               0.6.13
tokenizers         0.15.1
tomlkit            0.12.0
torch              2.1.2+cu118
torchsde           0.2.6
torchvision        0.16.2+cu118
tqdm               4.67.1
trampoline         0.1.2
transformers       4.37.2
typer              0.15.1
typing_extensions  4.12.2
tzdata             2024.2
urllib3            1.26.13
uvicorn            0.32.1
wavedrom           2.0.3.post3
webcolors          1.13
websockets         12.0
wheel              0.44.0
zipp               3.21.0

@zliucz
Collaborator

zliucz commented Dec 15, 2024

Hi, everyone. It seems this issue is related to ComfyUI-BrushNet. Please check issues #136 and #152 there. According to the responses in those threads, changing a few lines in MagicQuill/brushnet_nodes.py and MagicQuill/comfy/cli_args.py may solve the problem.

Change the following lines in MagicQuill/brushnet_nodes.py:
[screenshot of the code changes]

Also change line 62 of MagicQuill/comfy/cli_args.py to:

fpunet_group.add_argument("--fp8_e4m3fn-unet", type=bool, default=True, help="Store unet weights in fp8_e4m3fn.")
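One caveat about the quoted change: with argparse, `type=bool` applies Python's `bool()` to the raw command-line string, so any non-empty value (even "False") parses as True; combined with `default=True`, the option is effectively always on unless the source is edited again. A small standalone demonstration of this generic argparse behavior (not MagicQuill-specific):

```python
import argparse

parser = argparse.ArgumentParser()
# Mirrors the suggested cli_args.py change: default fp8 unet storage to True.
parser.add_argument("--fp8_e4m3fn-unet", type=bool, default=True,
                    help="Store unet weights in fp8_e4m3fn.")

args = parser.parse_args([])  # no flag passed -> default applies
print(args.fp8_e4m3fn_unet)   # True

args = parser.parse_args(["--fp8_e4m3fn-unet", "False"])
print(args.fp8_e4m3fn_unet)   # True: bool("False") is truthy, the string is non-empty
```

So after this edit the flag cannot be turned back off from the command line, which is fine here since always-on fp8 storage is exactly the intended workaround.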

@happydutch

It worked! Thank you!

@zliucz zliucz pinned this issue Dec 17, 2024