
Out of memory error #31

Open
AgeOfAlgorithms opened this issue Mar 26, 2024 · 5 comments

Comments


AgeOfAlgorithms commented Mar 26, 2024

Hello! I'm a big fan of this repo. I'm using an RX 6700 XT GPU with 12 GB of VRAM and ROCm on Ubuntu, and I get an out-of-memory error.

Error occurred when executing OOTDGenerate:

HIP out of memory. Tried to allocate 4.50 GiB. GPU 0 has a total capacty of 11.98 GiB of which 1.99 GiB is free. Of the allocated memory 9.59 GiB is allocated by PyTorch, and 47.22 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_HIP_ALLOC_CONF

  File "/home/sean/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sean/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sean/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sean/ComfyUI/custom_nodes/ComfyUI-OOTDiffusion/__init__.py", line 170, in generate
    images = pipe(
             ^^^^^
  File "/home/sean/ComfyUI/custom_nodes/ComfyUI-OOTDiffusion/inference_ootd.py", line 152, in __call__
    images = self.pipe(
             ^^^^^^^^^^
  File "/home/sean/anaconda3/envs/ootd/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/sean/ComfyUI/custom_nodes/ComfyUI-OOTDiffusion/pipelines_ootd/pipeline_ootd.py", line 354, in __call__
    _, spatial_attn_outputs = self.unet_garm(
                              ^^^^^^^^^^^^^^^
  File "/home/sean/anaconda3/envs/ootd/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sean/anaconda3/envs/ootd/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1528, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sean/ComfyUI/custom_nodes/ComfyUI-OOTDiffusion/pipelines_ootd/unet_garm_2d_condition.py", line 1079, in forward
    sample, res_samples, spatial_attn_inputs = downsample_block(
                                               ^^^^^^^^^^^^^^^^^
  File "/home/sean/anaconda3/envs/ootd/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sean/anaconda3/envs/ootd/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1528, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sean/ComfyUI/custom_nodes/ComfyUI-OOTDiffusion/pipelines_ootd/unet_garm_2d_blocks.py", line 1172, in forward
    hidden_states, spatial_attn_inputs = attn(
                                         ^^^^^
  File "/home/sean/anaconda3/envs/ootd/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sean/anaconda3/envs/ootd/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1528, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sean/ComfyUI/custom_nodes/ComfyUI-OOTDiffusion/pipelines_ootd/transformer_garm_2d.py", line 381, in forward
    hidden_states, spatial_attn_inputs = block(
                                         ^^^^^^
  File "/home/sean/anaconda3/envs/ootd/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sean/anaconda3/envs/ootd/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1528, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sean/ComfyUI/custom_nodes/ComfyUI-OOTDiffusion/pipelines_ootd/attention_garm.py", line 264, in forward
    attn_output = self.attn1(
                  ^^^^^^^^^^^
  File "/home/sean/anaconda3/envs/ootd/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sean/anaconda3/envs/ootd/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1528, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sean/anaconda3/envs/ootd/lib/python3.11/site-packages/diffusers/models/attention_processor.py", line 522, in forward
    return self.processor(
           ^^^^^^^^^^^^^^^
  File "/home/sean/anaconda3/envs/ootd/lib/python3.11/site-packages/diffusers/models/attention_processor.py", line 1231, in __call__
    hidden_states = F.scaled_dot_product_attention(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Is this node impossible to run on my GPU?
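
For reference, the error message itself suggests tuning the allocator. One thing that might be worth trying (a sketch only; the 128 MiB split size is an arbitrary example value) is launching ComfyUI with PYTORCH_HIP_ALLOC_CONF set:

# example value only; see the PyTorch memory-management docs
PYTORCH_HIP_ALLOC_CONF=max_split_size_mb:128 python main.py

I don't know whether that alone would be enough, since it only reduces fragmentation rather than total memory use.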

iyume (Contributor) commented Mar 26, 2024

Yes. I have handled this before on a 12 GB GPU.

I made a new branch for it; can you try it?

git fetch origin 12gb
git switch 12gb

I am not planning to merge this into the main branch because it is about three seconds slower.

AgeOfAlgorithms (Author):


I switched to the new branch and tried again, but I get the same error.

AgeOfAlgorithms (Author):

Sorry, I misclicked the close-issue button.

AgeOfAlgorithms (Author) commented Mar 28, 2024

@iyume Here is some additional information about my system:
ROCm version: 5.6.1.50601
Ubuntu version: 22.04.4 LTS
Python version: 3.11.8
PyTorch version: 2.2.0.dev20231010+rocm5.6

I've seen other people using ROCm hit this same error with different nodes and workflows;
this Reddit post, for example: https://www.reddit.com/r/comfyui/comments/1bo3940/img2vid_lcm_help_why_does_it_allocate_30gb/
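
In case it's useful, the PyTorch and HIP versions can be confirmed directly from the environment (torch.version.hip is populated on ROCm builds), for example:

python -c "import torch; print(torch.__version__, torch.version.hip)"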


yurayko commented May 6, 2024

The 12gb branch did not work on an RX 6800 with 16 GB either.
HIP out of memory. Tried to allocate 18.00 GiB. GPU
ROCm 6.1
