Allocation on device 0 would exceed allowed memory. (out of memory) #57
Comments
I get the same error. Even though I set the Generation node's resolution low, it still fails with the memory error instead of proceeding to the next step.
I got the same issue. Even though I launched ComfyUI with cuda_device=2, the error still reports "device 0" (see the device-selection note after these comments).
Having the same issue. Did any of you manage to resolve this?
Same issue here with batch size 1 and a 4070 with 12 GB of VRAM dedicated to ComfyUI on Linux.
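Regarding the cuda_device=2 comment above: PyTorch numbers devices relative to the GPUs the process is allowed to see, so the selected card is always reported as device 0 in error messages. A minimal sketch of pinning the process to one physical GPU via CUDA_VISIBLE_DEVICES (this is the generic PyTorch mechanism; whether your ComfyUI launch flag sets it the same way is an assumption and may vary by version):

```python
import os

# Restrict this process to physical GPU 2 *before* torch is imported;
# inside the process that card is then exposed as cuda:0.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

import torch

if torch.cuda.is_available():
    # Prints the name of physical GPU 2, visible here as device 0.
    print(torch.cuda.get_device_name(0))
```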
Hi people, when I try to use PhotoMaker I get this error:
Error occurred when executing NEW_PhotoMaker_Generation:
Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 9.14 GiB
Requested : 1024.00 MiB
Device limit : 11.99 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-PhotoMaker-ZHO\PhotoMakerNode.py", line 396, in generate_image
output = pipe(
^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-PhotoMaker-ZHO\pipeline.py", line 475, in call
image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\utils\accelerate_utils.py", line 46, in wrapper
return method(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py", line 304, in decode
decoded = self._decode(z).sample
^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py", line 275, in _decode
dec = self.decoder(z)
^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\autoencoders\vae.py", line 338, in forward
sample = up_block(sample, latent_embeds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\unet_2d_blocks.py", line 2535, in forward
hidden_states = upsampler(hidden_states)
^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\upsampling.py", line 186, in forward
hidden_states = self.conv(hidden_states)
^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
return self._conv_forward(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Do you have any ideas how to fix it?
My graphics card is an NVIDIA RTX 4070 Ti.
I've tried reinstalling the drivers, but nothing happened.
Thanks & Cheers
Sascha
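The traceback shows the allocation failing inside the VAE decode step (self.vae.decode(...) in pipeline.py), which is the most VRAM-hungry part of SDXL-style pipelines at the end of generation. Below is a minimal sketch of lowering its peak memory use, assuming the PhotoMaker pipeline behaves like a standard diffusers StableDiffusionXLPipeline subclass; the model id is a placeholder and the methods shown are stock diffusers APIs, not something specific to ComfyUI-PhotoMaker-ZHO:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Hypothetical pipeline handle; in PhotoMakerNode.py this corresponds to
# the `pipe` object used inside generate_image().
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # placeholder model id
    torch_dtype=torch.float16,
).to("cuda")

# Decode latents tile by tile instead of in one pass; trades a little speed
# for a much lower peak allocation during vae.decode().
pipe.enable_vae_tiling()

# Decode batched latents one image at a time (helps when batch size > 1).
pipe.enable_vae_slicing()

image = pipe(prompt="portrait photo of a person", num_inference_steps=20).images[0]
image.save("out.png")
```

If the error persists with "Free (according to CUDA): 0 bytes" while well under the device limit, launching the process with the PyTorch allocator option PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 can reduce fragmentation, though it will not help if the card is genuinely out of memory.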