RuntimeError: The size of tensor a (640) must match the size of tensor b (320) at non-singleton dimension 1 #136
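For readers unfamiliar with the error class: this is a plain-PyTorch sketch (unrelated to any specific checkpoint or node) of how such a message arises. Element-wise ops cannot broadcast two tensors whose channel dimensions (dim 1) differ and are both non-singleton, e.g. 640 vs. 320:

```python
# Minimal repro sketch of the error class (shapes chosen to match the title;
# the tensors here are placeholders, not actual model activations).
import torch

a = torch.zeros(1, 640, 8, 8)  # e.g. a 640-channel feature map
b = torch.zeros(1, 320, 8, 8)  # e.g. a 320-channel feature map from elsewhere
try:
    _ = a + b
except RuntimeError as e:
    # e.g. "The size of tensor a (640) must match the size of tensor b (320)
    # at non-singleton dimension 1"
    print(e)
```

In the issue, mismatches like this typically mean two model components were loaded or cast under incompatible assumptions, so their intermediate tensors no longer line up.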
Comments
Could you please post the full output from ComfyUI?
I'm having the same issue.
I don't know what changed, but it works for me now.
I can't reproduce the error either. Maybe some ComfyUI commits are the reason.
I figured it out: my ComfyUI launcher script was running with the "--fp8_e4m3fn-unet" argument for Flux.
I don't know what happens here, but a friend of mine sometimes hits this problem, and he can fix it by changing the checkpoint model. That method doesn't work for me, so I don't know what's going on. Thank you.
What checkpoint do you use? It should be
I have this exact same error! As soon as I removed the --fast flag from my launch arguments, the error was gone... but I wish I could have both. --fast is an incredibly powerful speed-up; it makes Flux generation on 40-series cards about 40% faster.
After upgrading to the latest version of ComfyUI on October 11th, I get an error message. Reverting to the October 9th version works fine. How can this be solved?
I also encountered this.
I upgraded CUDA to version 12.4, and now the latest version of ComfyUI works properly. Everyone can give it a try.
This issue appeared after upgrading ComfyUI (comfyanonymous/ComfyUI@e38c942#diff-83920b72a497ff05a33ecf5ac3d19df7911f228f9921fa21e7b64c3b24781fafR101). After that, loading the Flux model no longer requires adding the
This seems to solve the issue, but "--fast" is still faster than the "fp8_e4m3fn_fast" weight dtype in the Load Diffusion Model node. Maybe @nullquant can work something out so we can have the --fast argument and BrushNet at the same time. Appreciated! Thanks for the hard work.
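To summarize the two configurations the commenters are comparing (flag and option names are as reported in this thread; the launch command itself is a placeholder for however you start ComfyUI):

```shell
# Option A (reported to conflict with BrushNet at the time of this thread):
# global fp8 fast path via the launch flag.
python main.py --fast

# Option B (reported workaround): launch without --fast and instead select
# weight_dtype = fp8_e4m3fn_fast per-model in the "Load Diffusion Model" node.
python main.py
```

The trade-off discussed above is that Option A was measured as faster on 40-series GPUs, but broke BrushNet until the casting changes landed.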
"""
""" import torch cast_to = comfy.model_management.cast_to #TODO: remove once no more references def cast_to_input(weight, input, non_blocking=False, copy=True): def cast_bias_weight(s, input=None, dtype=None, device=None, bias_dtype=None):
class CastWeightBiasOp: class disable_weight_init:
class manual_cast(disable_weight_init):
def fp8_linear(self, input):
class fp8_ops(manual_cast):
def scaled_fp8_ops(fp8_matrix_mult=False, scale_input=False, override_dtype=None):
def pick_operations(weight_dtype, compute_dtype, load_device=None, disable_fast_fp8=False, fp8_optimizations=False, scaled_fp8=None):
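For context, the pattern those classes implement can be sketched in plain PyTorch. This is an illustrative toy, not ComfyUI's actual API (`ManualCastLinear` and `compute_dtype` are made-up names): weights stay stored in a low-precision dtype and are cast to the compute dtype on each forward pass, instead of converting the module permanently.

```python
# Illustrative "manual cast" sketch: store params in fp16, compute in fp32.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ManualCastLinear(nn.Linear):
    compute_dtype = torch.float32  # assumed target dtype for the matmul

    def forward(self, x):
        # Cast weight, bias, and input on the fly; storage dtype is untouched.
        w = self.weight.to(self.compute_dtype)
        b = self.bias.to(self.compute_dtype) if self.bias is not None else None
        return F.linear(x.to(self.compute_dtype), w, b)

layer = ManualCastLinear(320, 640).half()  # storage dtype: float16
out = layer(torch.randn(2, 320))           # matmul runs in float32
print(out.shape, out.dtype)                # torch.Size([2, 640]) torch.float32
```

A `pick_operations`-style helper then just chooses which op class (plain, manual-cast, fp8) to hand the model builder, based on the weight and compute dtypes; mismatched choices at load time are one way tensors of incompatible sizes or dtypes end up meeting at sampling time.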
@Orenji-Tangerine modify two lines of code in brushnet_nodes.py as shown below.
Hello, when the run reaches KSampler there is an error. Do you know how to deal with it? Thank you!