Closed
Labels
Potential Bug: User is reporting a bug. This should be tested.
Description
Custom Node Testing
- [x] I have tried disabling custom nodes and the issue persists (see how to disable custom nodes if you need help)
Expected Behavior
Running the #8669 workflow should output a fox girl waving her hand.
Actual Behavior
ComfyUI version: ec70ed6
Ubuntu 22.04
NVIDIA RTX 4090
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Apr__3_17:16:06_PDT_2023
Cuda compilation tools, release 12.1, V12.1.105
Build cuda_12.1.r12.1/compiler.32688072_0
torch 2.5.1+cu121
Steps to Reproduce
With all custom nodes disabled, running the #8669 workflow still generates an erroneous image.
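For reference, custom nodes can also be disabled at launch instead of one by one. A minimal sketch, assuming a standard ComfyUI checkout (the checkout path is illustrative):

```shell
# Launch ComfyUI with every custom node disabled via the built-in flag.
# The path ~/ComfyUI is an assumption; use your own checkout location.
cd ~/ComfyUI
python main.py --disable-all-custom-nodes
```

If the broken image still appears with this flag set, the problem is in ComfyUI core rather than in any custom node.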
Debug Logs
Checkpoint files will always be loaded safely.
Total VRAM 24111 MB, total RAM 80390 MB
pytorch version: 2.5.1+cu121
xformers version: 0.0.28.post3
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
Using xformers attention
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0]
ComfyUI version: 0.3.41
ComfyUI frontend version: 1.22.2
[Prompt Server] web root: /home/cuda12/anaconda3/envs/comfyui/lib/python3.10/site-packages/comfyui_frontend_package/static
Context impl SQLiteImpl.
Will assume non-transactional DDL.
No target revision found.
Starting server
To see the GUI go to: http://0.0.0.0:8188
To see the GUI go to: http://[::]:8188
got prompt
Using xformers attention in VAE
Using xformers attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
model weight dtype torch.float16, manual cast: None
model_type FLOW
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
Requested to load Omnigen2TEModel_
loaded completely 22095.55 5885.9609375 True
Requested to load AutoencodingEngine
loaded completely 13394.664058685303 159.87335777282715 True
Requested to load Omnigen2
loaded completely 15677.446700912475 7566.73583984375 True
100%|███████████████████████████████████████████████████████████████| 20/20 [00:29<00:00, 1.48s/it]
Prompt executed in 59.02 seconds
got prompt
Using xformers attention in VAE
Using xformers attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Requested to load AutoencodingEngine
loaded completely 5811.900371551514 159.87335777282715 True
100%|███████████████████████████████████████████████████████████████| 20/20 [00:29<00:00, 1.49s/it]
Prompt executed in 32.05 seconds
Other
No response
