Please help, I've been struggling for days, error after error, now getting an out-of-memory error #12713

@Ivanoplay9

Description

Custom Node Testing

Your question

I have an RX 580 8 GB, running torch 2.3.0+cu118, and I've been trying to get this working for a while. I'm considering going back to DirectML, since that could generate SDXL images at 1000x800 in about 2 minutes. I also tried SD 1.5 at 500x500 and it still gives a CUDA memory error. I thought ZLUDA would be much better, but I can't even get it to work: first I got a cublas_64_11.dll error, then `RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`, and now I'm getting an out-of-memory error. I tried different args (reserve 0 memory, force fp16, force fp32, lowvram) and disabled all custom nodes, but it's still not working.
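For reference, the arguments described above map onto ComfyUI's standard command-line flags. A sketch of the invocations being tried (the `main.py` entry point is the stock one; a ZLUDA install's launcher script may wrap it, so the exact command is an assumption):

```shell
REM Sketch only: flag spellings follow stock ComfyUI; a ZLUDA launcher may wrap this.
REM --reserve-vram takes an amount in GB, so 0 reserves nothing for the OS/driver.
python main.py --lowvram --reserve-vram 0 --force-fp16

REM fp32 variant tried instead of fp16:
REM python main.py --lowvram --reserve-vram 0 --force-fp32
```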

Logs

HIP_PATH = C:\Program Files\AMD\ROCm\5.7\
HIP_PATH_57 = C:\Program Files\AMD\ROCm\5.7\

[INFO] Detected Git version: 2.53.0.windows.1
[INFO] ComfyUI-Zluda current path: C:\ComfyUI-Zluda\
[INFO] ComfyUI-Zluda current version: 2026-03-01 01:46:00 hash: 91803466 branch: pre24patched
[INFO] Checking and updating to a new version if possible...
[INFO] Already up to date.

[INFO] AMD Software version: 23.7.2
[INFO] ZLUDA version: 3.9.5 [release build]
[INFO] Launching application via ZLUDA...

C:\ComfyUI-Zluda\venv\Lib\site-packages\requests\__init__.py:113: RequestsDependencyWarning: urllib3 (2.6.3) or chardet (6.0.0.post1)/charset_normalizer (3.4.4) doesn't match a supported version!
  warnings.warn(
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2026-03-01 11:29:21.215
** Platform: Windows
** Python version: 3.12.10 (tags/v3.12.10:0cc8128, Apr  8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
** Python executable: C:\ComfyUI-Zluda\venv\Scripts\python.exe
** ComfyUI Path: C:\ComfyUI-Zluda
** ComfyUI Base Folder Path: C:\ComfyUI-Zluda
** User directory: C:\ComfyUI-Zluda\user
** ComfyUI-Manager config path: C:\ComfyUI-Zluda\user\__manager\config.ini
** Log path: C:\ComfyUI-Zluda\user\comfyui.log

Prestartup times for custom nodes:
   6.4 seconds: C:\ComfyUI-Zluda\custom_nodes\ComfyUI-Manager

Warning, you are using an old pytorch version and some ckpt/pt files might be loaded unsafely. Upgrading to 2.4 or above is recommended as older versions of pytorch are no longer supported.
Failed to import comfy_kitchen, Error: No module named 'comfy_kitchen', fp8 and fp4 support will not be available.

  ::  Checking package versions...
Found pydantic: 2.12.5, pydantic-settings: 2.13.1
  ::  Pydantic packages are compatible, skipping reinstall
Installed version of comfyui-frontend-package: 1.39.19
Installed version of comfyui-workflow-templates: 0.9.4
Installed version of av: 16.1.0
Installed version of comfyui-embedded-docs: 0.4.3
Installed version of comfy-aimdo: 0.2.2
  ::  Package version check complete.
  ::  PyTorch RMSNorm not found, adding ComfyUI-compatible layer.
  ::  ComfyUI-compatible RMSNorm layer installed.

***----------------------ZLUDA-----------------------------***
  ::  Patching ONNX Runtime for ZLUDA — disabling CUDA EP.
  ::  ZLUDA detected, disabling non-supported functions.
  ::  CuDNN, flash_sdp, mem_efficient_sdp disabled.
  ::  Using ZLUDA with device: Radeon RX 580 Series [ZLUDA]
***--------------------------------------------------------***

Total VRAM 8192 MB, total RAM 16303 MB
pytorch version: 2.3.0+cu118
Set vram state to: LOW_VRAM
Device: cuda:0 Radeon RX 580 Series [ZLUDA] : native
Using async weight offloading with 2 streams
Please update pytorch to use native RMSNorm
Torch version too old to set sdpa backend priority.
Using split optimization for attention
No comfy kitchen, using old apply_rope functions.
Unsupported Pytorch detected. DynamicVRAM support requires Pytorch version 2.8 or later. Falling back to legacy ModelPatcher. VRAM estimates may be unreliable especially on Windows
Python version: 3.12.10 (tags/v3.12.10:0cc8128, Apr  8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
ComfyUI version: 0.15.1
ComfyUI frontend version: 1.39.19
[Prompt Server] web root: C:\ComfyUI-Zluda\venv\Lib\site-packages\comfyui_frontend_package\static
### Loading: ComfyUI-Manager (V3.39.2)
[ComfyUI-Manager] network_mode: public
[ComfyUI-Manager] ComfyUI per-queue preview override detected (PR #11261). Manager's preview method feature is disabled. Use ComfyUI's --preview-method CLI option or 'Settings > Execution > Live preview method'.
### ComfyUI Revision: 6415 on 'pre24patched' [91803466] | Released on '2026-03-01'

Import times for custom nodes:
   0.0 seconds: C:\ComfyUI-Zluda\custom_nodes\cfz_vae_loader.py
   0.0 seconds: C:\ComfyUI-Zluda\custom_nodes\cfz_cudnn.toggle.py
   0.0 seconds: C:\ComfyUI-Zluda\custom_nodes\CFZ-SwitchMenu
   0.0 seconds: C:\ComfyUI-Zluda\custom_nodes\cfz_patcher.py
   0.0 seconds: C:\ComfyUI-Zluda\custom_nodes\websocket_image_save.py
   0.0 seconds: C:\ComfyUI-Zluda\custom_nodes\CFZ-caching
   1.3 seconds: C:\ComfyUI-Zluda\custom_nodes\ComfyUI-Manager

Context impl SQLiteImpl.
Will assume non-transactional DDL.
Assets scan(roots=['models']) completed in 0.053s (created=0, skipped_existing=18, orphans_pruned=0, total_seen=18)
Starting server

To see the GUI go to: http://127.0.0.1:8188
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
FETCH ComfyRegistry Data: 5/128
[DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
[DEPRECATION WARNING] Detected import of deprecated legacy API: /extensions/core/groupNode.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
[CFZ Load] No cache files found
[CFZ Load] No cache files found
[DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui/components/buttonGroup.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
[DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui/components/button.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
FETCH ComfyRegistry Data: 10/128
FETCH ComfyRegistry Data: 15/128
FETCH ComfyRegistry Data: 20/128
FETCH ComfyRegistry Data: 25/128
FETCH ComfyRegistry Data: 30/128
FETCH ComfyRegistry Data: 35/128
FETCH ComfyRegistry Data: 40/128
FETCH ComfyRegistry Data: 45/128
FETCH ComfyRegistry Data: 50/128
FETCH ComfyRegistry Data: 55/128
FETCH ComfyRegistry Data: 60/128
FETCH ComfyRegistry Data: 65/128
FETCH ComfyRegistry Data: 70/128
FETCH ComfyRegistry Data: 75/128
FETCH ComfyRegistry Data: 80/128
FETCH ComfyRegistry Data: 85/128
FETCH ComfyRegistry Data: 90/128
FETCH ComfyRegistry Data: 95/128
FETCH ComfyRegistry Data: 100/128
FETCH ComfyRegistry Data: 105/128
FETCH ComfyRegistry Data: 110/128
FETCH ComfyRegistry Data: 115/128
FETCH ComfyRegistry Data: 120/128
FETCH ComfyRegistry Data: 125/128
FETCH ComfyRegistry Data [DONE]
[ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [DONE]
[ComfyUI-Manager] All startup tasks have been completed.
got prompt
Using split attention in VAE
Using split attention in VAE
VAE load device: cpu, offload device: cpu, dtype: torch.float32
model weight dtype torch.float16, manual cast: None
model_type EPS
Using split attention in VAE
Using split attention in VAE
VAE load device: cpu, offload device: cpu, dtype: torch.float32
Requested to load SDXLClipModel
loaded completely;  1560.80 MB loaded, full load: True
CLIP/text encoder model load device: cpu, offload device: cpu, current: cpu, dtype: torch.float16
Requested to load SDXL
loaded completely; 7246.80 MB usable, 4897.05 MB loaded, full load: True
!!! Exception during processing !!! CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Traceback (most recent call last):
  File "C:\ComfyUI-Zluda\execution.py", line 524, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI-Zluda\execution.py", line 333, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI-Zluda\execution.py", line 307, in _async_map_node_over_list
    await process_inputs(input_dict, i)
  File "C:\ComfyUI-Zluda\execution.py", line 295, in process_inputs
    result = f(**inputs)
             ^^^^^^^^^^^
  File "C:\ComfyUI-Zluda\nodes.py", line 1593, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI-Zluda\nodes.py", line 1558, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI-Zluda\comfy\sample.py", line 66, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI-Zluda\comfy\samplers.py", line 1179, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI-Zluda\comfy\samplers.py", line 1069, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI-Zluda\comfy\samplers.py", line 1051, in sample
    output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI-Zluda\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI-Zluda\comfy\samplers.py", line 995, in outer_sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI-Zluda\comfy\samplers.py", line 967, in inner_sample
    if latent_image is not None and torch.count_nonzero(latent_image) > 0: #Don't shift the empty latent image.
                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
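The figures in the log are consistent with a genuinely tight fit rather than a loader bug: roughly 2.6 billion fp16 UNet parameters (an approximate count for SDXL base, assumed here) land near the 4897 MB the loader reports, leaving only a few GB of the 8 GB card for activations, CLIP, and the VAE. A back-of-envelope check:

```python
# Rough sanity check of the VRAM numbers in the log above.
# 2.6e9 is an approximate parameter count for the SDXL base UNet (assumption).
params = 2.6e9
bytes_per_param = 2                      # fp16 weights
weights_mb = params * bytes_per_param / (1024 ** 2)
headroom_mb = 8192 - weights_mb          # "Total VRAM 8192 MB" per the log

print(f"fp16 weights: ~{weights_mb:.0f} MB")     # in the ballpark of the 4897.05 MB loaded
print(f"headroom:     ~{headroom_mb:.0f} MB for activations, CLIP, and VAE")
```

With only ~3 GB of headroom, a 1024x1024 SDXL sampling pass can plausibly exhaust the card even before any ZLUDA-specific overhead is considered.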

Other

No response

Metadata


    Labels

    User Support: A user needs help with something, probably not a bug.
