
Getting error while trying to create a video #207

Open
@zetroxos

Description

Hello

While trying to create a video, I keep getting this error: Command '['/usr/bin/gcc', '/tmp/tmp6nvfb49v/main.c', '-O3', '-shared', '-fPIC', '-o', '/tmp/tmp6nvfb49v/cuda_utils.cpython-312-x86_64-linux-gnu.so', '-lcuda', '-L/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/lib', '-L/lib/x86_64-linux-gnu', '-I/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/include', '-I/tmp/tmp6nvfb49v', '-I/usr/include/python3.12']' returned non-zero exit status 1.
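
The failing step is Triton compiling its cuda_utils helper with gcc, before the first sampling step completes. A minimal sketch that should trigger the same build outside ComfyUI (assuming the same /root/venv with triton installed; the call mirrors the one in the traceback below):

    # Force Triton to initialize its NVIDIA driver, which builds the same
    # cuda_utils module that fails inside the HyVideoSampler node.
    from triton.runtime.driver import driver

    # Same call that triton/runtime/jit.py makes in the traceback.
    device = driver.active.get_current_device()
    print("Triton active CUDA device:", device)

Run from a terminal in the venv, this should also show gcc's actual error message right before the CalledProcessError, instead of only "returned non-zero exit status 1".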

Does anyone know what causes it? The full log is below:

ComfyUI Error Report

Error Details

  • Node ID: 3
  • Node Type: HyVideoSampler
  • Exception Type: subprocess.CalledProcessError
  • Exception Message: Command '['/usr/bin/gcc', '/tmp/tmp6nvfb49v/main.c', '-O3', '-shared', '-fPIC', '-o', '/tmp/tmp6nvfb49v/cuda_utils.cpython-312-x86_64-linux-gnu.so', '-lcuda', '-L/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/lib', '-L/lib/x86_64-linux-gnu', '-I/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/include', '-I/tmp/tmp6nvfb49v', '-I/usr/include/python3.12']' returned non-zero exit status 1.

Stack Trace

  File "/mnt/nvme/dev/ComfyUI/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/mnt/nvme/dev/ComfyUI/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/mnt/nvme/dev/ComfyUI/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)

  File "/mnt/nvme/dev/ComfyUI/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/nodes.py", line 1242, in process
    out_latents = model["pipe"](
                  ^^^^^^^^^^^^^^

  File "/root/venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^

  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/diffusion/pipelines/pipeline_hunyuan_video.py", line 740, in __call__
    noise_pred = self.transformer(  # For an input image (129, 192, 336) (1, 256, 256)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/root/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/root/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/models.py", line 1051, in forward
    img, txt = _process_double_blocks(img, txt, vec, block_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/models.py", line 894, in _process_double_blocks
    img, txt = block(img, txt, vec, *block_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/root/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/root/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/models.py", line 257, in forward
    attn = attention(
           ^^^^^^^^^^

  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/attention.py", line 162, in attention
    x = sageattn_varlen_func(
        ^^^^^^^^^^^^^^^^^^^^^

  File "/root/venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^

  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/attention.py", line 23, in sageattn_varlen_func
    return sageattn_varlen(q, k, v, cu_seqlens_q, cu_seqlens_kv, max_seqlen_q, max_seqlen_kv)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/root/venv/lib/python3.12/site-packages/sageattention/core.py", line 198, in sageattn_varlen
    q_int8, q_scale, k_int8, k_scale, cu_seqlens_q_scale, cu_seqlens_k_scale = per_block_int8_varlen(q, k, cu_seqlens_q, cu_seqlens_k, max_seqlen_q, max_seqlen_k, sm_scale=sm_scale)
                                                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/root/venv/lib/python3.12/site-packages/sageattention/quant_per_block_varlen.py", line 69, in per_block_int8
    quant_per_block_int8_kernel[grid](

  File "/root/venv/lib/python3.12/site-packages/triton/runtime/jit.py", line 345, in <lambda>
    return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/root/venv/lib/python3.12/site-packages/triton/runtime/jit.py", line 607, in run
    device = driver.active.get_current_device()
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/root/venv/lib/python3.12/site-packages/triton/runtime/driver.py", line 23, in __getattr__
    self._initialize_obj()

  File "/root/venv/lib/python3.12/site-packages/triton/runtime/driver.py", line 20, in _initialize_obj
    self._obj = self._init_fn()
                ^^^^^^^^^^^^^^^

  File "/root/venv/lib/python3.12/site-packages/triton/runtime/driver.py", line 9, in _create_driver
    return actives[0]()
           ^^^^^^^^^^^^

  File "/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/driver.py", line 371, in __init__
    self.utils = CudaUtils()  # TODO: make static
                 ^^^^^^^^^^^

  File "/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/driver.py", line 80, in __init__
    mod = compile_module_from_src(Path(os.path.join(dirname, "driver.c")).read_text(), "cuda_utils")
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/driver.py", line 57, in compile_module_from_src
    so = _build(name, src_path, tmpdir, library_dirs(), include_dir, libraries)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/root/venv/lib/python3.12/site-packages/triton/runtime/build.py", line 48, in _build
    ret = subprocess.check_call(cc_cmd)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/usr/lib/python3.12/subprocess.py", line 413, in check_call
    raise CalledProcessError(retcode, cmd)
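
Note: the last frame is Triton's _build helper, which uses subprocess.check_call, so gcc's own diagnostic is not included in this report; it should appear in the ComfyUI console right above the traceback. If it is easier to capture programmatically, a small sketch (hypothetical helper, not part of Triton; the /tmp/tmp... build directories appear to be per-run temp dirs, so the command list has to point at an existing main.c):

    import subprocess

    def show_compiler_error(cc_cmd):
        # Run the compiler command but capture stderr, so the real diagnostic
        # (for example a missing header) is visible instead of only the exit code.
        result = subprocess.run(cc_cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(result.stderr)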

System Information

  • ComfyUI Version: 0.3.12
  • Arguments: ./main.py --listen 10.0.0.103
  • OS: posix
  • Python Version: 3.12.3 (main, Jan 17 2025, 18:03:48) [GCC 13.3.0]
  • Embedded Python: false
  • PyTorch Version: 2.5.1+cu124
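
Given Python 3.12 and GCC 13.3.0 above, a quick sanity check of what the failing gcc command needs: the CPython headers behind -I/usr/include/python3.12 and a linkable libcuda behind -lcuda. This is only a sketch, with the paths taken from the error message:

    import os
    import shutil
    import subprocess

    # gcc itself and the Python development headers the command includes.
    print("gcc:", shutil.which("gcc"))
    print("Python.h:", os.path.exists("/usr/include/python3.12/Python.h"))

    # Trivial link against libcuda, mirroring the -lcuda / -L/lib/x86_64-linux-gnu flags.
    probe = subprocess.run(
        ["gcc", "-xc", "-", "-o", "/dev/null", "-lcuda", "-L/lib/x86_64-linux-gnu"],
        input="int main(void){return 0;}",
        capture_output=True,
        text=True,
    )
    print("libcuda link:", "ok" if probe.returncode == 0 else probe.stderr)

If either check fails, missing Python dev headers or a missing libcuda would be my first guess, but I am not certain that is the cause here.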

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 25383337984
    • VRAM Free: 1771341338
    • Torch VRAM Total: 23253221376
    • Torch VRAM Free: 231507482

Logs

2025-01-28T17:14:37.242681 - [START] Security scan2025-01-28T17:14:37.242696 - 
2025-01-28T17:14:38.044947 - [DONE] Security scan2025-01-28T17:14:38.044975 - 
2025-01-28T17:14:38.079760 - ## ComfyUI-Manager: installing dependencies done.2025-01-28T17:14:38.079796 - 
2025-01-28T17:14:38.079819 - ** ComfyUI startup time:2025-01-28T17:14:38.079834 -  2025-01-28T17:14:38.079853 - 2025-01-28 17:14:38.0792025-01-28T17:14:38.079868 - 
2025-01-28T17:14:38.079884 - ** Platform:2025-01-28T17:14:38.079898 -  2025-01-28T17:14:38.079911 - Linux2025-01-28T17:14:38.079924 - 
2025-01-28T17:14:38.079938 - ** Python version:2025-01-28T17:14:38.079951 -  2025-01-28T17:14:38.079965 - 3.12.3 (main, Jan 17 2025, 18:03:48) [GCC 13.3.0]2025-01-28T17:14:38.079978 - 
2025-01-28T17:14:38.079992 - ** Python executable:2025-01-28T17:14:38.080005 -  2025-01-28T17:14:38.080017 - /root/venv/bin/python32025-01-28T17:14:38.080030 - 
2025-01-28T17:14:38.080044 - ** ComfyUI Path:2025-01-28T17:14:38.080056 -  2025-01-28T17:14:38.080072 - /mnt/nvme/dev/ComfyUI2025-01-28T17:14:38.080093 - 
2025-01-28T17:14:38.080112 - ** User directory:2025-01-28T17:14:38.080125 -  2025-01-28T17:14:38.080138 - /mnt/nvme/dev/ComfyUI/user2025-01-28T17:14:38.080151 - 
2025-01-28T17:14:38.080165 - ** ComfyUI-Manager config path:2025-01-28T17:14:38.080180 -  2025-01-28T17:14:38.080193 - /mnt/nvme/dev/ComfyUI/user/default/ComfyUI-Manager/config.ini2025-01-28T17:14:38.080207 - 
2025-01-28T17:14:38.080225 - ** Log path:2025-01-28T17:14:38.080238 -  2025-01-28T17:14:38.080250 - /mnt/nvme/dev/ComfyUI/user/comfyui.log2025-01-28T17:14:38.080266 - 
2025-01-28T17:14:39.201453 - 
Prestartup times for custom nodes:
2025-01-28T17:14:39.201582 -    2.1 seconds: /mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-Manager
2025-01-28T17:14:39.201624 - 
2025-01-28T17:14:40.394263 - Checkpoint files will always be loaded safely.
2025-01-28T17:14:40.505488 - Total VRAM 24207 MB, total RAM 128460 MB
2025-01-28T17:14:40.505584 - pytorch version: 2.5.1+cu124
2025-01-28T17:14:40.505805 - Set vram state to: NORMAL_VRAM
2025-01-28T17:14:40.506030 - Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
2025-01-28T17:14:41.281181 - Using pytorch attention
2025-01-28T17:14:41.970673 - [Prompt Server] web root: /mnt/nvme/dev/ComfyUI/web
2025-01-28T17:14:42.765335 - ### Loading: ComfyUI-Impact-Pack (V8.4.1)2025-01-28T17:14:42.765370 - 
2025-01-28T17:14:42.812079 - [Impact Pack] Wildcards loading done.2025-01-28T17:14:42.812160 - 
2025-01-28T17:14:42.872303 - ### Loading: ComfyUI-Inspire-Pack (V1.10)2025-01-28T17:14:42.872334 - 
2025-01-28T17:14:42.914620 - Total VRAM 24207 MB, total RAM 128460 MB
2025-01-28T17:14:42.914694 - pytorch version: 2.5.1+cu124
2025-01-28T17:14:42.914885 - Set vram state to: NORMAL_VRAM
2025-01-28T17:14:42.914981 - Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
2025-01-28T17:14:43.305501 - ------------------------------------------2025-01-28T17:14:43.305534 - 
2025-01-28T17:14:43.305568 - Comfyroll Studio v1.76 :  175 Nodes Loaded2025-01-28T17:14:43.305583 - 
2025-01-28T17:14:43.305599 - ------------------------------------------2025-01-28T17:14:43.305613 - 
2025-01-28T17:14:43.305629 - ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md2025-01-28T17:14:43.305642 - 
2025-01-28T17:14:43.305656 - ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki2025-01-28T17:14:43.305670 - 
2025-01-28T17:14:43.305683 - ------------------------------------------2025-01-28T17:14:43.305696 - 
2025-01-28T17:14:43.309768 - ### Loading: ComfyUI-Manager (V3.9.2)
2025-01-28T17:14:43.360413 - ### ComfyUI Version: v0.3.12-20-gce557cfb | Released on '2025-01-23'
2025-01-28T17:14:43.367771 - MultiGPU: Initialization started
2025-01-28T17:14:43.367894 - MultiGPU: Initial device cuda:0
2025-01-28T17:14:43.368451 - MultiGPU: Initial offload device cuda:0
2025-01-28T17:14:43.368528 - MultiGPU: Starting core node registration
2025-01-28T17:14:43.368592 - MultiGPU: Registered core node UNETLoader
2025-01-28T17:14:43.368651 - MultiGPU: Registered core node VAELoader
2025-01-28T17:14:43.368702 - MultiGPU: Registered core node CLIPLoader
2025-01-28T17:14:43.368749 - MultiGPU: Registered core node DualCLIPLoader
2025-01-28T17:14:43.368796 - MultiGPU: Registered core node TripleCLIPLoader
2025-01-28T17:14:43.368842 - MultiGPU: Registered core node CheckpointLoaderSimple
2025-01-28T17:14:43.368891 - MultiGPU: Registered core node ControlNetLoader
2025-01-28T17:14:43.368929 - MultiGPU: Checking for module at custom_nodes/ComfyUI-LTXVideo
2025-01-28T17:14:43.369098 - MultiGPU: Module ComfyUI-LTXVideo not found - skipping
2025-01-28T17:14:43.369306 - MultiGPU: Checking for module at custom_nodes/ComfyUI-Florence2
2025-01-28T17:14:43.369346 - MultiGPU: Found ComfyUI-Florence2, creating compatible MultiGPU nodes
2025-01-28T17:14:43.369404 - MultiGPU: Registered Florence2ModelLoaderMultiGPU
2025-01-28T17:14:43.369580 - MultiGPU: Registered DownloadAndLoadFlorence2ModelMultiGPU
2025-01-28T17:14:43.369617 - MultiGPU: Checking for module at custom_nodes/ComfyUI_bitsandbytes_NF4
2025-01-28T17:14:43.369890 - MultiGPU: Module ComfyUI_bitsandbytes_NF4 not found - skipping
2025-01-28T17:14:43.369928 - MultiGPU: Checking for module at custom_nodes/x-flux-comfyui
2025-01-28T17:14:43.370062 - MultiGPU: Module x-flux-comfyui not found - skipping
2025-01-28T17:14:43.370106 - MultiGPU: Checking for module at custom_nodes/ComfyUI-MMAudio
2025-01-28T17:14:43.370507 - MultiGPU: Module ComfyUI-MMAudio not found - skipping
2025-01-28T17:14:43.370907 - MultiGPU: Checking for module at custom_nodes/ComfyUI-GGUF
2025-01-28T17:14:43.371162 - MultiGPU: Module ComfyUI-GGUF not found - skipping
2025-01-28T17:14:43.371210 - MultiGPU: Checking for module at custom_nodes/PuLID_ComfyUI
2025-01-28T17:14:43.371260 - MultiGPU: Module PuLID_ComfyUI not found - skipping
2025-01-28T17:14:43.371304 - MultiGPU: Checking for module at custom_nodes/ComfyUI-HunyuanVideoWrapper
2025-01-28T17:14:43.371412 - MultiGPU: Found ComfyUI-HunyuanVideoWrapper, creating compatible MultiGPU nodes
2025-01-28T17:14:43.371701 - MultiGPU: Registered HyVideoModelLoader nodes
2025-01-28T17:14:43.371958 - MultiGPU: Registered HyVideoVAELoaderMultiGPU
2025-01-28T17:14:43.372024 - MultiGPU: Registered DownloadAndLoadHyVideoTextEncoderMultiGPU
2025-01-28T17:14:43.372080 - MultiGPU: Registration complete. Final mappings: DeviceSelectorMultiGPU, UNETLoaderMultiGPU, VAELoaderMultiGPU, CLIPLoaderMultiGPU, DualCLIPLoaderMultiGPU, TripleCLIPLoaderMultiGPU, CheckpointLoaderSimpleMultiGPU, ControlNetLoaderMultiGPU, Florence2ModelLoaderMultiGPU, DownloadAndLoadFlorence2ModelMultiGPU, HyVideoModelLoaderMultiGPU, HyVideoModelLoaderDiffSynthMultiGPU, HyVideoVAELoaderMultiGPU, DownloadAndLoadHyVideoTextEncoderMultiGPU
2025-01-28T17:14:43.390544 - /mnt/nvme/dev/ComfyUI2025-01-28T17:14:43.390573 - 
2025-01-28T17:14:43.390593 - ############################################2025-01-28T17:14:43.390610 - 
2025-01-28T17:14:43.390659 - /mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-NAI-styler/CSV2025-01-28T17:14:43.390677 - 
2025-01-28T17:14:43.390693 - ############################################2025-01-28T17:14:43.390708 - 
2025-01-28T17:14:43.390840 - []2025-01-28T17:14:43.390859 - 
2025-01-28T17:14:43.390876 - ############################################2025-01-28T17:14:43.390896 - 
2025-01-28T17:14:43.591416 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-01-28T17:14:43.603854 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-01-28T17:14:43.668389 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-01-28T17:14:43.731969 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-01-28T17:14:43.764937 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-01-28T17:14:43.990260 - WAS Node Suite: OpenCV Python FFMPEG support is enabled2025-01-28T17:14:43.990303 - 
2025-01-28T17:14:43.990349 - WAS Node Suite Warning: `ffmpeg_bin_path` is not set in `/mnt/nvme/dev/ComfyUI/custom_nodes/pr-was-node-suite-comfyui-47064894/was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.2025-01-28T17:14:43.990384 - 
2025-01-28T17:14:44.466048 - WAS Node Suite: Finished. Loaded 218 nodes successfully.2025-01-28T17:14:44.466143 - 
2025-01-28T17:14:44.466195 - 
	"Art is the bridge that connects imagination to reality." - Unknown
2025-01-28T17:14:44.466227 - 
2025-01-28T17:14:44.467787 - 
Import times for custom nodes:
2025-01-28T17:14:44.467902 -    0.0 seconds: /mnt/nvme/dev/ComfyUI/custom_nodes/websocket_image_save.py
2025-01-28T17:14:44.467974 -    0.0 seconds: /mnt/nvme/dev/ComfyUI/custom_nodes/text-utility
2025-01-28T17:14:44.468041 -    0.0 seconds: /mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-Universal-Styler
2025-01-28T17:14:44.468116 -    0.0 seconds: /mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-Image-Saver
2025-01-28T17:14:44.468178 -    0.0 seconds: /mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-Florence2
2025-01-28T17:14:44.468235 -    0.0 seconds: /mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI_UltimateSDUpscale
2025-01-28T17:14:44.468294 -    0.0 seconds: /mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-MultiGPU
2025-01-28T17:14:44.468353 -    0.0 seconds: /mnt/nvme/dev/ComfyUI/custom_nodes/efficiency-nodes-comfyui
2025-01-28T17:14:44.468410 -    0.0 seconds: /mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-KJNodes
2025-01-28T17:14:44.468466 -    0.0 seconds: /mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-Inspire-Pack
2025-01-28T17:14:44.468535 -    0.0 seconds: /mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-Impact-Pack
2025-01-28T17:14:44.468590 -    0.1 seconds: /mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-VideoHelperSuite
2025-01-28T17:14:44.468643 -    0.1 seconds: /mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-Manager
2025-01-28T17:14:44.468695 -    0.4 seconds: /mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI_Comfyroll_CustomNodes
2025-01-28T17:14:44.468747 -    0.5 seconds: /mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper
2025-01-28T17:14:44.468798 -    1.1 seconds: /mnt/nvme/dev/ComfyUI/custom_nodes/pr-was-node-suite-comfyui-47064894
2025-01-28T17:14:44.468851 - 
2025-01-28T17:14:44.478651 - Starting server

2025-01-28T17:14:44.479000 - To see the GUI go to: http://10.0.0.103:8188
2025-01-28T17:14:49.926871 - FETCH ComfyRegistry Data: 5/312025-01-28T17:14:49.926949 - 
2025-01-28T17:14:56.495376 - FETCH ComfyRegistry Data: 10/312025-01-28T17:14:56.495455 - 
2025-01-28T17:15:04.397215 - FETCH ComfyRegistry Data: 15/312025-01-28T17:15:04.397628 - 
2025-01-28T17:15:04.803570 - FETCH DATA from: /mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json2025-01-28T17:15:04.803639 - 2025-01-28T17:15:04.817927 -  [DONE]2025-01-28T17:15:04.818003 - 
2025-01-28T17:15:05.684839 - [Inspire Pack] IPAdapterPlus is not installed.2025-01-28T17:15:05.684867 - 
2025-01-28T17:15:05.708677 - Error. No naistyles.csv found. Put your naistyles.csv in the custom_nodes/ComfyUI_NAI-mod/CSV directory of ComfyUI. Then press "Refresh".
                  Your current root directory is: /mnt/nvme/dev/ComfyUI
            2025-01-28T17:15:05.708701 - 
2025-01-28T17:15:05.708725 - Error. No naistyles.csv found. Put your naistyles.csv in the custom_nodes/ComfyUI_NAI-mod/CSV directory of ComfyUI. Then press "Refresh".
                  Your current root directory is: /mnt/nvme/dev/ComfyUI
            2025-01-28T17:15:05.708738 - 
2025-01-28T17:15:05.708760 - Error. No naistyles.csv found. Put your naistyles.csv in the custom_nodes/ComfyUI_NAI-mod/CSV directory of ComfyUI. Then press "Refresh".
                  Your current root directory is: /mnt/nvme/dev/ComfyUI
            2025-01-28T17:15:05.708790 - 
2025-01-28T17:15:05.708816 - Error. No naistyles.csv found. Put your naistyles.csv in the custom_nodes/ComfyUI_NAI-mod/CSV directory of ComfyUI. Then press "Refresh".
                  Your current root directory is: /mnt/nvme/dev/ComfyUI
            2025-01-28T17:15:05.708830 - 
2025-01-28T17:15:05.708849 - Error. No naistyles.csv found. Put your naistyles.csv in the custom_nodes/ComfyUI_NAI-mod/CSV directory of ComfyUI. Then press "Refresh".
                  Your current root directory is: /mnt/nvme/dev/ComfyUI
            2025-01-28T17:15:05.708863 - 
2025-01-28T17:15:05.708881 - Error. No naistyles.csv found. Put your naistyles.csv in the custom_nodes/ComfyUI_NAI-mod/CSV directory of ComfyUI. Then press "Refresh".
                  Your current root directory is: /mnt/nvme/dev/ComfyUI
            2025-01-28T17:15:05.708895 - 
2025-01-28T17:15:11.168987 - FETCH ComfyRegistry Data: 20/312025-01-28T17:15:11.169068 - 
2025-01-28T17:15:14.344806 - got prompt
2025-01-28T17:15:16.152317 - Loading text encoder model (clipL) from: /mnt/nvme/dev/ComfyUI/models/clip/clip-vit-large-patch14
2025-01-28T17:15:16.245735 - Text encoder to dtype: torch.float16
2025-01-28T17:15:16.290473 - Loading tokenizer (clipL) from: /mnt/nvme/dev/ComfyUI/models/clip/clip-vit-large-patch14
2025-01-28T17:15:16.380133 - Loading text encoder model (llm) from: /mnt/nvme/dev/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer
2025-01-28T17:15:17.928174 - 
Loading checkpoint shards:  25%|▎| 1/4 [00:01<00:04,  1.32025-01-28T17:15:18.055683 - FETCH ComfyRegistry Data: 25/312025-01-28T17:15:18.055765 - 
2025-01-28T17:15:20.409274 - 
Loading checkpoint shards: 100%|█| 4/4 [00:03<00:00,  1.22025-01-28T17:15:20.409458 - 
Loading checkpoint shards: 100%|█| 4/4 [00:03<00:00,  1.02025-01-28T17:15:20.409554 - 
2025-01-28T17:15:20.434973 - Text encoder to dtype: torch.float16
2025-01-28T17:15:20.435121 - Loading tokenizer (llm) from: /mnt/nvme/dev/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer
2025-01-28T17:15:21.142069 - llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 19
2025-01-28T17:15:24.183195 - clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 22
2025-01-28T17:15:25.779014 - FETCH ComfyRegistry Data: 30/312025-01-28T17:15:25.779107 - 
2025-01-28T17:15:26.831989 - model_type FLOW
2025-01-28T17:15:26.846258 - The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
2025-01-28T17:15:26.846460 - Scheduler config:2025-01-28T17:15:26.846486 -  2025-01-28T17:15:26.846513 - FrozenDict({'num_train_timesteps': 1000, 'flow_shift': 9.0, 'reverse': True, 'solver': 'euler', 'n_tokens': None, '_use_default_values': ['n_tokens', 'num_train_timesteps']})2025-01-28T17:15:26.846567 - 
2025-01-28T17:15:26.848315 - Using accelerate to load and assign model weights to device...
2025-01-28T17:15:27.097315 - Loading LoRA: czarna100 with strength: 1.0
2025-01-28T17:15:27.128447 - Requested to load HyVideoModel
2025-01-28T17:15:27.349832 - FETCH ComfyRegistry Data [DONE]2025-01-28T17:15:27.349925 - 
2025-01-28T17:15:27.451213 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
2025-01-28T17:15:27.514229 - nightly_channel: 
https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/remote
2025-01-28T17:15:27.514336 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json2025-01-28T17:15:27.514414 - 2025-01-28T17:15:27.870734 -  [DONE]2025-01-28T17:15:27.870829 - 
2025-01-28T17:15:28.623124 - loaded completely 9393.082815933227 9393.082763671875 False
2025-01-28T17:15:28.628577 - Input (height, width, video_length) = (512, 512, 33)
2025-01-28T17:15:28.628823 - The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
2025-01-28T17:15:28.629076 - Scheduler config:2025-01-28T17:15:28.629219 -  2025-01-28T17:15:28.629269 - FrozenDict({'num_train_timesteps': 1000, 'flow_shift': 9.0, 'reverse': True, 'solver': 'euler', 'n_tokens': None, '_use_default_values': ['n_tokens', 'num_train_timesteps']})2025-01-28T17:15:28.629286 - 
2025-01-28T17:15:28.631181 - Swapping 20 double blocks and 0 single blocks2025-01-28T17:15:28.631204 - 
2025-01-28T17:15:28.876592 - Sampling 33 frames in 9 latents at 512x512 with 20 inference steps
2025-01-28T17:15:28.876948 - 
  0%|                             | 0/20 [00:00<?, ?it/s]2025-01-28T17:15:29.031011 - 
  0%|                             | 0/20 [00:00<?, ?it/s]2025-01-28T17:15:29.031043 - 
2025-01-28T17:15:29.054434 - !!! Exception during processing !!! Command '['/usr/bin/gcc', '/tmp/tmp9z1ao3mc/main.c', '-O3', '-shared', '-fPIC', '-o', '/tmp/tmp9z1ao3mc/cuda_utils.cpython-312-x86_64-linux-gnu.so', '-lcuda', '-L/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/lib', '-L/lib/x86_64-linux-gnu', '-I/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/include', '-I/tmp/tmp9z1ao3mc', '-I/usr/include/python3.12']' returned non-zero exit status 1.
2025-01-28T17:15:29.057932 - Traceback (most recent call last):
  File "/mnt/nvme/dev/ComfyUI/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/mnt/nvme/dev/ComfyUI/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/nodes.py", line 1242, in process
    out_latents = model["pipe"](
                  ^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/diffusion/pipelines/pipeline_hunyuan_video.py", line 740, in __call__
    noise_pred = self.transformer(  # For an input image (129, 192, 336) (1, 256, 256)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/models.py", line 1051, in forward
    img, txt = _process_double_blocks(img, txt, vec, block_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/models.py", line 894, in _process_double_blocks
    img, txt = block(img, txt, vec, *block_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/models.py", line 257, in forward
    attn = attention(
           ^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/attention.py", line 162, in attention
    x = sageattn_varlen_func(
        ^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/attention.py", line 23, in sageattn_varlen_func
    return sageattn_varlen(q, k, v, cu_seqlens_q, cu_seqlens_kv, max_seqlen_q, max_seqlen_kv)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/sageattention/core.py", line 198, in sageattn_varlen
    q_int8, q_scale, k_int8, k_scale, cu_seqlens_q_scale, cu_seqlens_k_scale = per_block_int8_varlen(q, k, cu_seqlens_q, cu_seqlens_k, max_seqlen_q, max_seqlen_k, sm_scale=sm_scale)
                                                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/sageattention/quant_per_block_varlen.py", line 69, in per_block_int8
    quant_per_block_int8_kernel[grid](
  File "/root/venv/lib/python3.12/site-packages/triton/runtime/jit.py", line 345, in <lambda>
    return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/runtime/jit.py", line 607, in run
    device = driver.active.get_current_device()
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/runtime/driver.py", line 23, in __getattr__
    self._initialize_obj()
  File "/root/venv/lib/python3.12/site-packages/triton/runtime/driver.py", line 20, in _initialize_obj
    self._obj = self._init_fn()
                ^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/runtime/driver.py", line 9, in _create_driver
    return actives[0]()
           ^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/driver.py", line 371, in __init__
    self.utils = CudaUtils()  # TODO: make static
                 ^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/driver.py", line 80, in __init__
    mod = compile_module_from_src(Path(os.path.join(dirname, "driver.c")).read_text(), "cuda_utils")
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/driver.py", line 57, in compile_module_from_src
    so = _build(name, src_path, tmpdir, library_dirs(), include_dir, libraries)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/runtime/build.py", line 48, in _build
    ret = subprocess.check_call(cc_cmd)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/subprocess.py", line 413, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmp9z1ao3mc/main.c', '-O3', '-shared', '-fPIC', '-o', '/tmp/tmp9z1ao3mc/cuda_utils.cpython-312-x86_64-linux-gnu.so', '-lcuda', '-L/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/lib', '-L/lib/x86_64-linux-gnu', '-I/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/include', '-I/tmp/tmp9z1ao3mc', '-I/usr/include/python3.12']' returned non-zero exit status 1.

2025-01-28T17:15:29.059869 - Prompt executed in 14.71 seconds
2025-01-28T17:24:37.342956 - got prompt
2025-01-28T17:24:37.352456 - Input (height, width, video_length) = (512, 512, 33)
2025-01-28T17:24:37.352805 - The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
2025-01-28T17:24:37.353066 - Scheduler config:2025-01-28T17:24:37.353118 -  2025-01-28T17:24:37.353181 - FrozenDict({'num_train_timesteps': 1000, 'flow_shift': 9.0, 'reverse': True, 'solver': 'euler', 'n_tokens': None, '_use_default_values': ['n_tokens', 'num_train_timesteps']})2025-01-28T17:24:37.353247 - 
2025-01-28T17:24:37.355980 - Swapping 20 double blocks and 0 single blocks2025-01-28T17:24:37.356015 - 
2025-01-28T17:24:37.613808 - Sampling 33 frames in 9 latents at 512x512 with 20 inference steps
2025-01-28T17:24:37.614119 - 
  0%|                             | 0/20 [00:00<?, ?it/s]2025-01-28T17:24:37.659407 - 
  0%|                             | 0/20 [00:00<?, ?it/s]2025-01-28T17:24:37.659438 - 
2025-01-28T17:24:37.672402 - !!! Exception during processing !!! Command '['/usr/bin/gcc', '/tmp/tmpa8z8zvdk/main.c', '-O3', '-shared', '-fPIC', '-o', '/tmp/tmpa8z8zvdk/cuda_utils.cpython-312-x86_64-linux-gnu.so', '-lcuda', '-L/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/lib', '-L/lib/x86_64-linux-gnu', '-I/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/include', '-I/tmp/tmpa8z8zvdk', '-I/usr/include/python3.12']' returned non-zero exit status 1.
2025-01-28T17:24:37.674284 - Traceback (most recent call last):
  File "/mnt/nvme/dev/ComfyUI/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/mnt/nvme/dev/ComfyUI/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/nodes.py", line 1242, in process
    out_latents = model["pipe"](
                  ^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/diffusion/pipelines/pipeline_hunyuan_video.py", line 740, in __call__
    noise_pred = self.transformer(  # For an input image (129, 192, 336) (1, 256, 256)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/models.py", line 1051, in forward
    img, txt = _process_double_blocks(img, txt, vec, block_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/models.py", line 894, in _process_double_blocks
    img, txt = block(img, txt, vec, *block_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/models.py", line 257, in forward
    attn = attention(
           ^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/attention.py", line 162, in attention
    x = sageattn_varlen_func(
        ^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/attention.py", line 23, in sageattn_varlen_func
    return sageattn_varlen(q, k, v, cu_seqlens_q, cu_seqlens_kv, max_seqlen_q, max_seqlen_kv)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/sageattention/core.py", line 198, in sageattn_varlen
    q_int8, q_scale, k_int8, k_scale, cu_seqlens_q_scale, cu_seqlens_k_scale = per_block_int8_varlen(q, k, cu_seqlens_q, cu_seqlens_k, max_seqlen_q, max_seqlen_k, sm_scale=sm_scale)
                                                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/sageattention/quant_per_block_varlen.py", line 69, in per_block_int8
    quant_per_block_int8_kernel[grid](
  File "/root/venv/lib/python3.12/site-packages/triton/runtime/jit.py", line 345, in <lambda>
    return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/runtime/jit.py", line 607, in run
    device = driver.active.get_current_device()
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/runtime/driver.py", line 23, in __getattr__
    self._initialize_obj()
  File "/root/venv/lib/python3.12/site-packages/triton/runtime/driver.py", line 20, in _initialize_obj
    self._obj = self._init_fn()
                ^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/runtime/driver.py", line 9, in _create_driver
    return actives[0]()
           ^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/driver.py", line 371, in __init__
    self.utils = CudaUtils()  # TODO: make static
                 ^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/driver.py", line 80, in __init__
    mod = compile_module_from_src(Path(os.path.join(dirname, "driver.c")).read_text(), "cuda_utils")
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/driver.py", line 57, in compile_module_from_src
    so = _build(name, src_path, tmpdir, library_dirs(), include_dir, libraries)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/runtime/build.py", line 48, in _build
    ret = subprocess.check_call(cc_cmd)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/subprocess.py", line 413, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpa8z8zvdk/main.c', '-O3', '-shared', '-fPIC', '-o', '/tmp/tmpa8z8zvdk/cuda_utils.cpython-312-x86_64-linux-gnu.so', '-lcuda', '-L/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/lib', '-L/lib/x86_64-linux-gnu', '-I/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/include', '-I/tmp/tmpa8z8zvdk', '-I/usr/include/python3.12']' returned non-zero exit status 1.

2025-01-28T17:24:37.676180 - Prompt executed in 0.33 seconds
2025-01-28T17:29:05.910949 - got prompt
2025-01-28T17:29:05.920568 - Input (height, width, video_length) = (512, 512, 33)
2025-01-28T17:29:05.920917 - The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
2025-01-28T17:29:05.921264 - Scheduler config:2025-01-28T17:29:05.921391 -  2025-01-28T17:29:05.921474 - FrozenDict({'num_train_timesteps': 1000, 'flow_shift': 9.0, 'reverse': True, 'solver': 'euler', 'n_tokens': None, '_use_default_values': ['n_tokens', 'num_train_timesteps']})2025-01-28T17:29:05.921526 - 
2025-01-28T17:29:05.925724 - Swapping 20 double blocks and 0 single blocks2025-01-28T17:29:05.925776 - 
2025-01-28T17:29:06.192125 - Sampling 33 frames in 9 latents at 512x512 with 20 inference steps
2025-01-28T17:29:06.192411 - 
  0%|                             | 0/20 [00:00<?, ?it/s]2025-01-28T17:29:06.239819 - 
  0%|                             | 0/20 [00:00<?, ?it/s]2025-01-28T17:29:06.239852 - 
2025-01-28T17:29:06.253025 - !!! Exception during processing !!! Command '['/usr/bin/gcc', '/tmp/tmp6nvfb49v/main.c', '-O3', '-shared', '-fPIC', '-o', '/tmp/tmp6nvfb49v/cuda_utils.cpython-312-x86_64-linux-gnu.so', '-lcuda', '-L/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/lib', '-L/lib/x86_64-linux-gnu', '-I/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/include', '-I/tmp/tmp6nvfb49v', '-I/usr/include/python3.12']' returned non-zero exit status 1.
2025-01-28T17:29:06.254847 - Traceback (most recent call last):
  File "/mnt/nvme/dev/ComfyUI/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/mnt/nvme/dev/ComfyUI/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/nodes.py", line 1242, in process
    out_latents = model["pipe"](
                  ^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/diffusion/pipelines/pipeline_hunyuan_video.py", line 740, in __call__
    noise_pred = self.transformer(  # For an input image (129, 192, 336) (1, 256, 256)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/models.py", line 1051, in forward
    img, txt = _process_double_blocks(img, txt, vec, block_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/models.py", line 894, in _process_double_blocks
    img, txt = block(img, txt, vec, *block_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/models.py", line 257, in forward
    attn = attention(
           ^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/attention.py", line 162, in attention
    x = sageattn_varlen_func(
        ^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/mnt/nvme/dev/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/attention.py", line 23, in sageattn_varlen_func
    return sageattn_varlen(q, k, v, cu_seqlens_q, cu_seqlens_kv, max_seqlen_q, max_seqlen_kv)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/sageattention/core.py", line 198, in sageattn_varlen
    q_int8, q_scale, k_int8, k_scale, cu_seqlens_q_scale, cu_seqlens_k_scale = per_block_int8_varlen(q, k, cu_seqlens_q, cu_seqlens_k, max_seqlen_q, max_seqlen_k, sm_scale=sm_scale)
                                                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/sageattention/quant_per_block_varlen.py", line 69, in per_block_int8
    quant_per_block_int8_kernel[grid](
  File "/root/venv/lib/python3.12/site-packages/triton/runtime/jit.py", line 345, in <lambda>
    return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/runtime/jit.py", line 607, in run
    device = driver.active.get_current_device()
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/runtime/driver.py", line 23, in __getattr__
    self._initialize_obj()
  File "/root/venv/lib/python3.12/site-packages/triton/runtime/driver.py", line 20, in _initialize_obj
    self._obj = self._init_fn()
                ^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/runtime/driver.py", line 9, in _create_driver
    return actives[0]()
           ^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/driver.py", line 371, in __init__
    self.utils = CudaUtils()  # TODO: make static
                 ^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/driver.py", line 80, in __init__
    mod = compile_module_from_src(Path(os.path.join(dirname, "driver.c")).read_text(), "cuda_utils")
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/driver.py", line 57, in compile_module_from_src
    so = _build(name, src_path, tmpdir, library_dirs(), include_dir, libraries)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/lib/python3.12/site-packages/triton/runtime/build.py", line 48, in _build
    ret = subprocess.check_call(cc_cmd)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/subprocess.py", line 413, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmp6nvfb49v/main.c', '-O3', '-shared', '-fPIC', '-o', '/tmp/tmp6nvfb49v/cuda_utils.cpython-312-x86_64-linux-gnu.so', '-lcuda', '-L/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/lib', '-L/lib/x86_64-linux-gnu', '-I/root/venv/lib/python3.12/site-packages/triton/backends/nvidia/include', '-I/tmp/tmp6nvfb49v', '-I/usr/include/python3.12']' returned non-zero exit status 1.

2025-01-28T17:29:06.256691 - Prompt executed in 0.34 seconds

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":41,"last_link_id":45,"nodes":[{"id":35,"type":"HyVideoBlockSwap","pos":[-351,-44],"size":[315,130],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"block_swap_args","type":"BLOCKSWAPARGS","links":[43]}],"properties":{"Node name for S&R":"HyVideoBlockSwap"},"widgets_values":[20,0,true,true]},{"id":5,"type":"HyVideoDecode","pos":[920,-279],"size":[345.4285888671875,150],"flags":{},"order":7,"mode":0,"inputs":[{"name":"vae","type":"VAE","link":6},{"name":"samples","type":"LATENT","link":4}],"outputs":[{"name":"images","type":"IMAGE","links":[42],"slot_index":0}],"properties":{"Node name for S&R":"HyVideoDecode"},"widgets_values":[true,64,128,false]},{"id":7,"type":"HyVideoVAELoader","pos":[442,-282],"size":[379.166748046875,82],"flags":{},"order":1,"mode":0,"inputs":[{"name":"compile_args","type":"COMPILEARGS","link":null,"shape":7}],"outputs":[{"name":"vae","type":"VAE","links":[6],"slot_index":0}],"properties":{"Node name for S&R":"HyVideoVAELoader"},"widgets_values":["hunyuan_video_vae_bf16.safetensors","bf16"]},{"id":16,"type":"DownloadAndLoadHyVideoTextEncoder","pos":[-300,380],"size":[441,178],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"hyvid_text_encoder","type":"HYVIDTEXTENCODER","links":[35]}],"properties":{"Node name for S&R":"DownloadAndLoadHyVideoTextEncoder"},"widgets_values":["Kijai/llava-llama-3-8b-text-encoder-tokenizer","openai/clip-vit-large-patch14","fp16",false,2,"bnb_nf4"]},{"id":30,"type":"HyVideoTextEncode","pos":[210,380],"size":[437.631591796875,274.0087890625],"flags":{},"order":4,"mode":0,"inputs":[{"name":"text_encoders","type":"HYVIDTEXTENCODER","link":35},{"name":"custom_prompt_template","type":"PROMPT_TEMPLATE","link":null,"shape":7},{"name":"clip_l","type":"CLIP","link":null,"shape":7},{"name":"hyvid_cfg","type":"HYVID_CFG","link":null,"shape":7}],"outputs":[{"name":"hyvid_embeds","type":"HYVIDEMBEDS","links":[36]}],"properties":{"Node name for S&R":"HyVideoTextEncode"},"widgets_values":["woman sits calmly in a chair, speaking entheusiawith natural and realistic motion.","bad quality video","video"]},{"id":1,"type":"HyVideoModelLoader","pos":[65.60003662109375,-33.4000244140625],"size":[509.7506103515625,242],"flags":{},"order":5,"mode":0,"inputs":[{"name":"compile_args","type":"COMPILEARGS","link":null,"shape":7},{"name":"block_swap_args","type":"BLOCKSWAPARGS","link":43,"shape":7},{"name":"lora","type":"HYVIDLORA","link":45,"shape":7}],"outputs":[{"name":"model","type":"HYVIDEOMODEL","links":[2],"slot_index":0}],"properties":{"Node name for S&R":"HyVideoModelLoader"},"widgets_values":["hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors","bf16","fp8_e4m3fn","offload_device","sageattn_varlen",false,true]},{"id":34,"type":"VHS_VideoCombine","pos":[1336.1998291015625,-179.08001708984375],"size":[371.7926940917969,699.792724609375],"flags":{},"order":8,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":42},{"name":"audio","type":"AUDIO","link":null,"shape":7},{"name":"meta_batch","type":"VHS_BatchManager","link":null,"shape":7},{"name":"vae","type":"VAE","link":null,"shape":7}],"outputs":[{"name":"Filenames","type":"VHS_FILENAMES","links":null}],"properties":{"Node name for 
S&R":"VHS_VideoCombine"},"widgets_values":{"frame_rate":24,"loop_count":0,"filename_prefix":"HunyuanVideo","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":19,"save_metadata":true,"trim_to_audio":false,"pingpong":false,"save_output":true,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"HunyuanVideo_00015.mp4","subfolder":"","type":"output","format":"video/h264-mp4","frame_rate":24,"workflow":"HunyuanVideo_00015.png","fullpath":"/mnt/nvme/dev/ComfyUI/output/HunyuanVideo_00015.mp4"},"muted":false}}},{"id":41,"type":"HyVideoLoraSelect","pos":[43.26377487182617,-226.8912811279297],"size":[315,102],"flags":{},"order":3,"mode":0,"inputs":[{"name":"prev_lora","type":"HYVIDLORA","link":null,"shape":7},{"name":"blocks","type":"SELECTEDBLOCKS","link":null,"shape":7}],"outputs":[{"name":"lora","type":"HYVIDLORA","links":[45]}],"properties":{"Node name for S&R":"HyVideoLoraSelect"},"widgets_values":["czarna100.safetensors",1]},{"id":3,"type":"HyVideoSampler","pos":[696.85107421875,-34.40348815917969],"size":[315,418],"flags":{},"order":6,"mode":0,"inputs":[{"name":"model","type":"HYVIDEOMODEL","link":2},{"name":"hyvid_embeds","type":"HYVIDEMBEDS","link":36},{"name":"samples","type":"LATENT","link":null,"shape":7},{"name":"stg_args","type":"STGARGS","link":null,"shape":7},{"name":"context_options","type":"HYVIDCONTEXT","link":null,"shape":7},{"name":"feta_args","type":"FETAARGS","link":null,"shape":7},{"name":"teacache_args","type":"TEACACHEARGS","link":null,"shape":7}],"outputs":[{"name":"samples","type":"LATENT","links":[4],"slot_index":0}],"properties":{"Node name for S&R":"HyVideoSampler"},"widgets_values":[512,512,33,20,6,9,2,"fixed",1,1,"FlowMatchDiscreteScheduler"]}],"links":[[2,1,0,3,0,"HYVIDEOMODEL"],[4,3,0,5,1,"LATENT"],[6,7,0,5,0,"VAE"],[35,16,0,30,0,"HYVIDTEXTENCODER"],[36,30,0,3,1,"HYVIDEMBEDS"],[42,5,0,34,0,"IMAGE"],[43,35,0,1,1,"BLOCKSWAPARGS"],[45,41,0,1,2,"HYVIDLORA"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.7513148009015777,"offset":[581.7648570622553,494.9512370928687]},"VHS_latentpreview":false,"VHS_latentpreviewrate":0,"node_versions":{"ComfyUI-HunyuanVideoWrapper":"88823b79b7e41377e4dccf0790719578e139bbf3","ComfyUI-VideoHelperSuite":"f24f4e10f448913eb8c0d8ce5ff6190a8be84454"}},"version":0.4}

Additional Context

(Please add any additional context or steps to reproduce the error here)
