ComfyUI portable AMD is basically still broken and unaddressed since the update. #11469
Janembruh68
started this conversation in
General
ComfyUI Portable is still throwing the amdgpu-arch "binary not found" error, is not reporting GPU usage correctly, and is mis-allocating memory: prompts fail because physical memory fills up while the system reports near-limitless usable memory, so there is effectively no memory management and no OOM error. There are lots of reports of similar memory problems left unaddressed for AMD and NVIDIA users, and a few users have even offered patches for the underlying memory issues.
Here is the amdgpu-arch "binary not found" error:
[WARNING] failed to run amdgpu-arch: binary not found.
Manually unloading models and the cache is a temporary workaround (setting --normalvram stopped it from overrunning for me as well).
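For anyone who wants to try the same workaround: this is roughly how the launch command would look with --normalvram swapped in for --highvram (the path is from my install, adjust to yours; --normalvram is a standard ComfyUI flag):

```
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --normalvram
```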
Here is the sus line from my log:
loaded completely; 95367431640625005117571072.00 MB usable, 7672.25 MB loaded, full load: True
If you are getting the same error, or having memory problems like slow prompts or failing/unresponsive prompts, please paste your logs in the comments for visibility. Maybe we can get a response or an acknowledgment.
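To make the scale of the problem concrete, here is a quick back-of-envelope check in plain Python, using only the two figures printed in my log below, showing why the allocator can never hit an OOM path:

```python
# Figures copied straight from the log: the bogus "usable" value reported at
# model load, and the actual VRAM detected at startup.
reported_usable_mb = 95367431640625005117571072.00  # "... MB usable"
total_vram_mb = 16368.0                             # "Total VRAM 16368 MB"

# The "usable" figure overstates real VRAM by more than 20 orders of magnitude,
# so any fits-in-memory check against it always succeeds and nothing gets
# evicted until physical memory is already exhausted.
ratio = reported_usable_mb / total_vram_mb
print(f"reported usable / actual VRAM ~ {ratio:.2e}")
```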
D:\ComfyUI_windows_portable_amd\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --highvram
[WARNING] failed to run amdgpu-arch: binary not found.
Checkpoint files will always be loaded safely.
Total VRAM 16368 MB, total RAM 65462 MB
pytorch version: 2.9.0+rocmsdk20251116
Set: torch.backends.cudnn.enabled = False for better AMD performance.
AMD arch: gfx1101
ROCm version: (7, 1)
Set vram state to: HIGH_VRAM
Device: cuda:0 AMD Radeon RX 7800 XT : native
Enabled pinned memory 29457.0
Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention
Python version: 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
ComfyUI version: 0.4.0
ComfyUI frontend version: 1.34.8
[Prompt Server] web root: D:\ComfyUI_windows_portable_amd\ComfyUI_windows_portable\python_embeded\Lib\site-packages\comfyui_frontend_package\static
Total VRAM 16368 MB, total RAM 65462 MB
pytorch version: 2.9.0+rocmsdk20251116
Set: torch.backends.cudnn.enabled = False for better AMD performance.
AMD arch: gfx1101
ROCm version: (7, 1)
Set vram state to: HIGH_VRAM
Device: cuda:0 AMD Radeon RX 7800 XT : native
Enabled pinned memory 29457.0
Import times for custom nodes:
0.0 seconds: D:\ComfyUI_windows_portable_amd\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
Context impl SQLiteImpl.
Will assume non-transactional DDL.
No target revision found.
Starting server
To see the GUI go to: http://127.0.0.1:8188/
got prompt
Using split attention in VAE
Using split attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
Requested to load ZImageTEModel_
loaded completely; 95367431640625005117571072.00 MB usable, 7672.25 MB loaded, full load: True
model weight dtype torch.bfloat16, manual cast: None
model_type FLOW
unet missing: ['norm_final.weight']
Requested to load Lumina2
loaded completely; 95367431640625005117571072.00 MB usable, 11739.55 MB loaded, full load: True
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:10<00:00, 1.12s/it]
Requested to load AutoencodingEngine
loaded completely; 95367431640625005117571072.00 MB usable, 159.87 MB loaded, full load: True
Prompt executed in 27.44 seconds
got prompt
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:03<00:00, 2.42it/s]
Prompt executed in 4.55 seconds
got prompt
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:03<00:00, 2.37it/s]
Prompt executed in 4.66 seconds
got prompt
Requested to load ZImageTEModel_
Unloaded partially: 5945.80 MB freed, 5793.77 MB remains loaded, 75.00 MB buffer reserved, lowvram patches: 0
loaded completely; 95367431640625005117571072.00 MB usable, 7672.25 MB loaded, full load: True
loaded completely; 95367431640625005117571072.00 MB usable, 11739.55 MB loaded, full load: True
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:09<00:00, 1.03s/it]
Requested to load AutoencodingEngine
loaded completely; 95367431640625005117571072.00 MB usable, 159.87 MB loaded, full load: True
Prompt executed in 18.61 seconds
got prompt
100%|██████████████████████████████████████████████████████████████████████████████████| 15/15 [00:06<00:00, 2.24it/s]
Prompt executed in 7.78 seconds
got prompt
100%|██████████████████████████████████████████████████████████████████████████████████| 15/15 [00:06<00:00, 2.29it/s]
Prompt executed in 7.61 seconds