Closed
Labels
Potential Bug: User is reporting a bug. This should be tested.
Description
Custom Node Testing
- I have tried disabling custom nodes and the issue persists (see how to disable custom nodes if you need help)
Expected Behavior
The workflow should run and generate an image normally.
Actual Behavior
HIP out of memory. Tried to allocate 333.38 GiB. GPU 0 has a total capacity of 15.92 GiB of which 7.28 GiB is free. Of the allocated memory 8.22 GiB is allocated by PyTorch, and 19.88 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
This error means you ran out of memory on your GPU.
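For reference, the mitigation the error message itself suggests can be sketched as follows. This is a general fragmentation workaround, not a confirmed fix for this report (a 333 GiB allocation request likely points at a deeper problem); the variable must be in the environment before PyTorch initializes its caching allocator, and ROCm builds of PyTorch read the same CUDA-named variable:

```python
import os

# Set the allocator hint *before* PyTorch is imported, e.g. at the very top
# of a launcher script (or via `set PYTORCH_CUDA_ALLOC_CONF=...` in cmd
# before running main.py). PyTorch reads it when the caching allocator
# initializes, so setting it after `import torch` has no effect.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

The equivalent one-liner for the portable package's cmd prompt would be `set PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` before invoking `python_embeded\python.exe`.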
Steps to Reproduce
Install the AMD preview driver 25.12.1.
Debug Logs
C:\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --disable-all-custom-nodes
[WARNING] failed to run amdgpu-arch: binary not found.
Checkpoint files will always be loaded safely.
Total VRAM 16304 MB, total RAM 49063 MB
pytorch version: 2.9.0+rocmsdk20251116
Set: torch.backends.cudnn.enabled = False for better AMD performance.
AMD arch: gfx1201
ROCm version: (7, 1)
Set vram state to: NORMAL_VRAM
Device: cuda:0 AMD Radeon RX 9070 XT : native
Enabled pinned memory 22078.0
Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention
Python version: 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
ComfyUI version: 0.4.0
________________________________________________________________________
WARNING WARNING WARNING WARNING WARNING
Installed frontend version 1.33.13 is lower than the recommended version 1.34.8.
Please install the updated requirements.txt file by running:
C:\ComfyUI_windows_portable\python_embeded\python.exe -s -m pip install -r C:\ComfyUI_windows_portable\ComfyUI\requirements.txt
If you are on the portable package you can run: update\update_comfyui.bat to solve this problem.
This error is happening because the ComfyUI frontend is no longer shipped as part of the main repo but as a pip package instead.
________________________________________________________________________
[Prompt Server] web root: C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\comfyui_frontend_package\static
Total VRAM 16304 MB, total RAM 49063 MB
pytorch version: 2.9.0+rocmsdk20251116
Set: torch.backends.cudnn.enabled = False for better AMD performance.
AMD arch: gfx1201
ROCm version: (7, 1)
Set vram state to: NORMAL_VRAM
Device: cuda:0 AMD Radeon RX 9070 XT : native
Enabled pinned memory 22078.0
Skipping loading of custom nodes
Context impl SQLiteImpl.
Will assume non-transactional DDL.
No target revision found.
________________________________________________________________________
WARNING WARNING WARNING WARNING WARNING
Installed frontend version 1.33.13 is lower than the recommended version 1.34.8.
Please install the updated requirements.txt file by running:
C:\ComfyUI_windows_portable\python_embeded\python.exe -s -m pip install -r C:\ComfyUI_windows_portable\ComfyUI\requirements.txt
If you are on the portable package you can run: update\update_comfyui.bat to solve this problem.
This error is happening because the ComfyUI frontend is no longer shipped as part of the main repo but as a pip package instead.
________________________________________________________________________
Starting server
To see the GUI go to: http://127.0.0.1:8188
got prompt
invalid prompt: {'type': 'invalid_prompt', 'message': 'Cannot execute because a node is missing the class_type property.', 'details': "Node ID '#28'", 'extra_info': {}}
got prompt
model weight dtype torch.bfloat16, manual cast: None
model_type FLOW
Using split attention in VAE
Using split attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Requested to load LuminaTEModel_
loaded completely; 95367431640625005117571072.00 MB usable, 4986.46 MB loaded, full load: True
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16
Requested to load Lumina2
Unloaded partially: 2160.46 MB freed, 2826.00 MB remains loaded, 121.50 MB buffer reserved, lowvram patches: 0
loaded completely; 9045.30 MB usable, 4977.74 MB loaded, full load: True
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:03<00:00, 8.10it/s]
Requested to load AutoencodingEngine
Unloaded partially: 1940.24 MB freed, 3037.50 MB remains loaded, 40.50 MB buffer reserved, lowvram patches: 0
loaded completely; 5120.57 MB usable, 159.87 MB loaded, full load: True
Prompt executed in 12.93 seconds
Other
No response