OmniGen2 Image Gen Error #8670

@Hazukiaoi

Description

Custom Node Testing

Expected Behavior

Running the #8669 workflow should output an image of a fox girl waving her hand.

Actual Behavior

ComfyUI version: ec70ed6
Ubuntu 22.04
NVIDIA RTX 4090
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Apr__3_17:16:06_PDT_2023
Cuda compilation tools, release 12.1, V12.1.105
Build cuda_12.1.r12.1/compiler.32688072_0

torch 2.5.1+cu121

The error looks like: (image attachment showing corrupted output)

Steps to Reproduce

I disabled all custom nodes, ran the #8669 workflow, and it still generated a corrupted image.
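For reference, the step above can be sketched as the following launch command. This is an assumption about the reporter's setup: the checkout path is a placeholder, and `--disable-all-custom-nodes` is the ComfyUI CLI flag for skipping custom node loading.

```shell
# Launch ComfyUI with every custom node disabled, to rule out
# third-party nodes as the cause (path is a placeholder).
cd ~/ComfyUI
python main.py --disable-all-custom-nodes

# Then open http://localhost:8188 in a browser, load the workflow
# attached to issue #8669, and queue the prompt.
```

If the corrupted image still appears with this flag set, the problem is in ComfyUI core (or the model files) rather than in any custom node.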

Debug Logs

Checkpoint files will always be loaded safely.
Total VRAM 24111 MB, total RAM 80390 MB
pytorch version: 2.5.1+cu121
xformers version: 0.0.28.post3
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
Using xformers attention
Python version: 3.10.0 (default, Mar  3 2022, 09:58:08) [GCC 7.5.0]
ComfyUI version: 0.3.41
ComfyUI frontend version: 1.22.2
[Prompt Server] web root: /home/cuda12/anaconda3/envs/comfyui/lib/python3.10/site-packages/comfyui_frontend_package/static
Context impl SQLiteImpl.
Will assume non-transactional DDL.
No target revision found.
Starting server

To see the GUI go to: http://0.0.0.0:8188
To see the GUI go to: http://[::]:8188
got prompt
Using xformers attention in VAE
Using xformers attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
model weight dtype torch.float16, manual cast: None
model_type FLOW
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
Requested to load Omnigen2TEModel_
loaded completely 22095.55 5885.9609375 True
Requested to load AutoencodingEngine
loaded completely 13394.664058685303 159.87335777282715 True
Requested to load Omnigen2
loaded completely 15677.446700912475 7566.73583984375 True
100%|███████████████████████████████████████████████████████████████| 20/20 [00:29<00:00,  1.48s/it]
Prompt executed in 59.02 seconds
got prompt
Using xformers attention in VAE
Using xformers attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Requested to load AutoencodingEngine
loaded completely 5811.900371551514 159.87335777282715 True
100%|███████████████████████████████████████████████████████████████| 20/20 [00:29<00:00,  1.49s/it]
Prompt executed in 32.05 seconds

Other

No response

Labels

Potential Bug (user is reporting a bug; this should be tested)