HyVideoSampler expected str, bytes or os.PathLike object, not NoneType #123

Open

Zarrette opened this issue Dec 12, 2024 · 0 comments
ComfyUI Error Report

Error Details

  • Node ID: 3
  • Node Type: HyVideoSampler
  • Exception Type: TypeError
  • Exception Message: expected str, bytes or os.PathLike object, not NoneType

Stack Trace

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\nodes.py", line 823, in process
    out_latents = model["pipe"](
                  ^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\diffusion\pipelines\pipeline_hunyuan_video.py", line 569, in __call__
    noise_pred = self.transformer(  # For an input image (129, 192, 336) (1, 256, 256)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\modules\models.py", line 752, in forward
    img, txt = block(*double_block_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\modules\models.py", line 205, in forward
    attn = attention(
           ^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\modules\attention.py", line 162, in attention
    x = sageattn_varlen_func(
        ^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\_dynamo\eval_frame.py", line 632, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\modules\attention.py", line 23, in sageattn_varlen_func
    return sageattn_varlen(q, k, v, cu_seqlens_q, cu_seqlens_kv, max_seqlen_q, max_seqlen_kv)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\sageattention\core.py", line 198, in sageattn_varlen
    q_int8, q_scale, k_int8, k_scale, cu_seqlens_q_scale, cu_seqlens_k_scale = per_block_int8_varlen(q, k, cu_seqlens_q, cu_seqlens_k, max_seqlen_q, max_seqlen_k, sm_scale=sm_scale)
                                                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\sageattention\quant_per_block_varlen.py", line 69, in per_block_int8
    quant_per_block_int8_kernel[grid](

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\triton\runtime\jit.py", line 345, in <lambda>
    return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\triton\runtime\jit.py", line 662, in run
    kernel = self.compile(
             ^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\triton\compiler\compiler.py", line 244, in compile
    key = f"{triton_key()}-{src.hash()}-{backend.hash()}-{options.hash()}-{str(sorted(env_vars.items()))}"
                                         ^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\triton\backends\nvidia\compiler.py", line 336, in hash
    version = get_ptxas_version()
              ^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\triton\backends\nvidia\compiler.py", line 38, in get_ptxas_version
    version = subprocess.check_output([_path_to_binary("ptxas")[0], "--version"]).decode("utf-8")
                                       ^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\triton\backends\nvidia\compiler.py", line 23, in _path_to_binary
    os.path.join(os.environ.get("CUDA_PATH"), "bin", binary),
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "<frozen ntpath>", line 108, in join

System Information

  • ComfyUI Version: v0.3.7-18-g3dfdddc
  • Arguments: ComfyUI\main.py --windows-standalone-build
  • OS: nt
  • Python Version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
  • Embedded Python: true
  • PyTorch Version: 2.5.1+cu124

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 4080 : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 17170956288
    • VRAM Free: 1708188166
    • Torch VRAM Total: 13723762688
    • Torch VRAM Free: 54313478

Logs

2024-12-12T11:12:36.713480 - [START] Security scan
2024-12-12T11:12:38.307241 - [DONE] Security scan
2024-12-12T11:12:38.418082 - ## ComfyUI-Manager: installing dependencies done.
2024-12-12T11:12:38.418082 - ** ComfyUI startup time: 2024-12-12 11:12:38.418082
2024-12-12T11:12:38.445742 - ** Platform: Windows
2024-12-12T11:12:38.445742 - ** Python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct  1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
2024-12-12T11:12:38.445742 - ** Python executable: C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\python.exe
2024-12-12T11:12:38.445742 - ** ComfyUI Path: C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI
2024-12-12T11:12:38.445742 - ** Log path: C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\comfyui.log
2024-12-12T11:12:39.109383 - Prestartup times for custom nodes:
2024-12-12T11:12:39.110401 -    2.4 seconds: C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
2024-12-12T11:12:41.360867 - Total VRAM 16376 MB, total RAM 31958 MB
2024-12-12T11:12:41.360867 - pytorch version: 2.5.1+cu124
2024-12-12T11:12:41.362056 - Set vram state to: NORMAL_VRAM
2024-12-12T11:12:41.362056 - Device: cuda:0 NVIDIA GeForce RTX 4080 : cudaMallocAsync
2024-12-12T11:12:42.583115 - Using pytorch cross attention
2024-12-12T11:12:44.041721 - [Prompt Server] web root: C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\web
2024-12-12T11:12:44.897456 - ### Loading: ComfyUI-Manager (V2.55.4)
2024-12-12T11:12:44.979892 - ### ComfyUI Revision: 2908 [3dfdddcc] *DETACHED | Released on '2024-12-11'
2024-12-12T11:12:45.716776 - Import times for custom nodes:
2024-12-12T11:12:45.716776 -    0.0 seconds: C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
2024-12-12T11:12:45.716776 -    0.0 seconds: C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ExtraModels
2024-12-12T11:12:45.716776 -    0.2 seconds: C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
2024-12-12T11:12:45.716776 -    0.3 seconds: C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper
2024-12-12T11:12:45.716776 -    0.5 seconds: C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
2024-12-12T11:12:45.721855 - Starting server

2024-12-12T11:12:45.721855 - To see the GUI go to: http://127.0.0.1:8188
2024-12-12T11:12:45.976457 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2024-12-12T11:12:45.985523 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2024-12-12T11:12:46.171334 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2024-12-12T11:12:46.442636 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2024-12-12T11:12:46.555312 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2024-12-12T11:12:46.702854 - FETCH DATA from: C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
2024-12-12T11:12:53.107285 - got prompt
2024-12-12T11:12:54.242967 - Loading text encoder model (clipL) from: C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\clip\clip-vit-large-patch14
2024-12-12T11:12:54.552666 - Text encoder to dtype: torch.float16
2024-12-12T11:12:54.637523 - Loading tokenizer (clipL) from: C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\clip\clip-vit-large-patch14
2024-12-12T11:12:54.728127 - Loading text encoder model (llm) from: C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\LLM\llava-llama-3-8b-text-encoder-tokenizer
2024-12-12T11:13:29.511080 - Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [00:34<00:00,  8.64s/it]
2024-12-12T11:13:55.636771 - Text encoder to dtype: torch.float16
2024-12-12T11:13:58.634735 - Loading tokenizer (llm) from: C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\LLM\llava-llama-3-8b-text-encoder-tokenizer
2024-12-12T11:14:00.539074 - llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 24
2024-12-12T11:14:05.406008 - clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 25
2024-12-12T11:14:05.665758 - Using accelerate to load and assign model weights to device...
2024-12-12T11:14:17.678691 - Scheduler config: FrozenDict({'num_train_timesteps': 1000, 'shift': 9.0, 'reverse': True, 'solver': 'euler', 'n_tokens': None, '_use_default_values': ['num_train_timesteps', 'n_tokens']})
2024-12-12T11:14:17.685731 - Input (height, width, video_length) = (224, 416, 77)
2024-12-12T11:14:18.032581 - Sampling 77 frames in 20 latents at 416x224 with 25 inference steps
2024-12-12T11:14:18.036055 -   0%|                                                                                           | 0/25 [00:00<?, ?it/s]
2024-12-12T11:14:18.576957 - !!! Exception during processing !!! expected str, bytes or os.PathLike object, not NoneType
2024-12-12T11:14:18.589410 - Traceback (most recent call last):
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\nodes.py", line 823, in process
    out_latents = model["pipe"](
                  ^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\diffusion\pipelines\pipeline_hunyuan_video.py", line 569, in __call__
    noise_pred = self.transformer(  # For an input image (129, 192, 336) (1, 256, 256)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\modules\models.py", line 752, in forward
    img, txt = block(*double_block_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\modules\models.py", line 205, in forward
    attn = attention(
           ^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\modules\attention.py", line 162, in attention
    x = sageattn_varlen_func(
        ^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\_dynamo\eval_frame.py", line 632, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\modules\attention.py", line 23, in sageattn_varlen_func
    return sageattn_varlen(q, k, v, cu_seqlens_q, cu_seqlens_kv, max_seqlen_q, max_seqlen_kv)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\sageattention\core.py", line 198, in sageattn_varlen
    q_int8, q_scale, k_int8, k_scale, cu_seqlens_q_scale, cu_seqlens_k_scale = per_block_int8_varlen(q, k, cu_seqlens_q, cu_seqlens_k, max_seqlen_q, max_seqlen_k, sm_scale=sm_scale)
                                                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\sageattention\quant_per_block_varlen.py", line 69, in per_block_int8
    quant_per_block_int8_kernel[grid](
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\triton\runtime\jit.py", line 345, in <lambda>
    return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\triton\runtime\jit.py", line 662, in run
    kernel = self.compile(
             ^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\triton\compiler\compiler.py", line 244, in compile
    key = f"{triton_key()}-{src.hash()}-{backend.hash()}-{options.hash()}-{str(sorted(env_vars.items()))}"
                                         ^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\triton\backends\nvidia\compiler.py", line 336, in hash
    version = get_ptxas_version()
              ^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\triton\backends\nvidia\compiler.py", line 38, in get_ptxas_version
    version = subprocess.check_output([_path_to_binary("ptxas")[0], "--version"]).decode("utf-8")
                                       ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\triton\backends\nvidia\compiler.py", line 23, in _path_to_binary
    os.path.join(os.environ.get("CUDA_PATH"), "bin", binary),
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen ntpath>", line 108, in join
TypeError: expected str, bytes or os.PathLike object, not NoneType

2024-12-12T11:14:18.593053 - Prompt executed in 85.46 seconds

Attached Workflow

Please make sure that the workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":54,"last_link_id":52,"nodes":[{"id":16,"type":"DownloadAndLoadHyVideoTextEncoder","pos":[-375,165],"size":[405,178],"flags":{"pinned":true},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"hyvid_text_encoder","type":"HYVIDTEXTENCODER","links":[35]}],"properties":{"Node name for S&R":"DownloadAndLoadHyVideoTextEncoder"},"widgets_values":["Kijai/llava-llama-3-8b-text-encoder-tokenizer","openai/clip-vit-large-patch14","fp16",false,2,"disabled"],"color":"#222","bgcolor":"#000"},{"id":1,"type":"HyVideoModelLoader","pos":[-375,-165],"size":[405,195],"flags":{"pinned":true},"order":1,"mode":0,"inputs":[{"name":"compile_args","type":"COMPILEARGS","link":null,"shape":7},{"name":"block_swap_args","type":"BLOCKSWAPARGS","link":null,"shape":7}],"outputs":[{"name":"model","type":"HYVIDEOMODEL","links":[2],"slot_index":0}],"properties":{"Node name for S&R":"HyVideoModelLoader"},"widgets_values":["hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors","bf16","fp8_e4m3fn","main_device","sageattn_varlen"],"color":"#222","bgcolor":"#000"},{"id":47,"type":"Note","pos":[30,-255],"size":[405,60],"flags":{"collapsed":false,"pinned":true},"order":2,"mode":0,"inputs":[],"outputs":[],"title":"STEP 2","properties":{},"widgets_values":["PROMPT"],"color":"#ffe119","bgcolor":"#ffcd05"},{"id":7,"type":"HyVideoVAELoader","pos":[-375,60],"size":[405,82],"flags":{"collapsed":false,"pinned":true},"order":3,"mode":0,"inputs":[{"name":"compile_args","type":"COMPILEARGS","link":null,"shape":7}],"outputs":[{"name":"vae","type":"VAE","links":[6],"slot_index":0}],"properties":{"Node name for S&R":"HyVideoVAELoader"},"widgets_values":["hunyuan_video_vae_bf16.safetensors","fp16"],"color":"#222","bgcolor":"#000"},{"id":48,"type":"Note","pos":[435,-255],"size":[345,58],"flags":{"collapsed":false,"pinned":true},"order":4,"mode":0,"inputs":[],"outputs":[],"title":"STEP 3","properties":{},"widgets_values":["VIDEO GENERATION 
SETTINGS"],"color":"#3a19ff","bgcolor":"#2605ff"},{"id":52,"type":"Note","pos":[780,-255],"size":[555,58],"flags":{"collapsed":false,"pinned":true},"order":5,"mode":0,"inputs":[],"outputs":[],"title":"STEP 4","properties":{},"widgets_values":["SAVE VIDEO"],"color":"#75ff19","bgcolor":"#61ff05"},{"id":50,"type":"Note","pos":[-375,-255],"size":[405,58],"flags":{"collapsed":false,"pinned":true},"order":6,"mode":0,"inputs":[],"outputs":[],"title":"STEP 1","properties":{},"widgets_values":["LOAD MODELS"],"color":"#ff3a19","bgcolor":"#ff2605"},{"id":53,"type":"Note","pos":[1680,525],"size":[1340,400],"flags":{"collapsed":true},"order":7,"mode":0,"inputs":[],"outputs":[],"title":"by Black Mixture 🤙🏽","properties":{},"widgets_values":["\n\n  ███████████  ████                     █████         ██████   ██████  ███               █████                                 \n ░░███░░░░░███░░███                    ░░███         ░░██████ ██████  ░░░               ░░███                                  \n  ░███    ░███ ░███   ██████    ██████  ░███ █████    ░███░█████░███  ████  █████ █████ ███████   █████ ████ ████████   ██████ \n  ░██████████  ░███  ░░░░░███  ███░░███ ░███░░███     ░███░░███ ░███ ░░███ ░░███ ░░███ ░░░███░   ░░███ ░███ ░░███░░███ ███░░███\n  ░███░░░░░███ ░███   ███████ ░███ ░░░  ░██████░      ░███ ░░░  ░███  ░███  ░░░█████░    ░███     ░███ ░███  ░███ ░░░ ░███████ \n  ░███    ░███ ░███  ███░░███ ░███  ███ ░███░░███     ░███      ░███  ░███   ███░░░███   ░███ ███ ░███ ░███  ░███     ░███░░░  \n  ███████████  █████░░████████░░██████  ████ █████    █████     █████ █████ █████ █████  ░░█████  ░░████████ █████    ░░██████ \n ░░░░░░░░░░░  ░░░░░  ░░░░░░░░  ░░░░░░  ░░░░ ░░░░░    ░░░░░     ░░░░░ ░░░░░ ░░░░░ ░░░░░    ░░░░░    ░░░░░░░░ ░░░░░      ░░░░░░  \n\n\n@blackmixture on IG and 
YT\n\nhttps://www.instagram.com/blackmixture/\nhttps://www.youtube.com/@BlackMixture/videos\n"],"color":"#194bff","bgcolor":"#0537ff"},{"id":41,"type":"HYDiTTextEncoderLoader","pos":[-375,345],"size":[405,150],"flags":{"pinned":true},"order":8,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","type":"CLIP","links":null},{"name":"T5","type":"T5","links":[],"slot_index":1}],"properties":{"Node name for S&R":"HYDiTTextEncoderLoader"},"widgets_values":["ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors",null,"cpu","default"],"color":"#222","bgcolor":"#000"},{"id":5,"type":"HyVideoDecode","pos":[435,390],"size":[345,150],"flags":{"pinned":true},"order":13,"mode":0,"inputs":[{"name":"vae","type":"VAE","link":6},{"name":"samples","type":"LATENT","link":4}],"outputs":[{"name":"images","type":"IMAGE","links":[42],"slot_index":0}],"properties":{"Node name for S&R":"HyVideoDecode"},"widgets_values":[true,8,256,true],"color":"#222","bgcolor":"#000"},{"id":54,"type":"Note","pos":[1410,525],"size":[267.7664794921875,475.4589538574219],"flags":{"collapsed":true},"order":9,"mode":0,"inputs":[],"outputs":[],"title":"Workflow Details","properties":{},"widgets_values":["Black Mixture Hunyuan Workflow Text→to→Video \n\nVersion: 1\nPublished:12/08/24\nAuthor:Nate Dwarika\nOrg: Black Mixture\n\nThanks for downloading! \n\nNeed help? 
Check out our YT channel and Discord:\n\nhttps://www.youtube.com/@BlackMixture/videos\n\nhttps://discord.gg/n2N7Hgvn7n\n\nHope you enjoy and have fun with this workflow!\n\n"],"color":"#194bff","bgcolor":"#0537ff"},{"id":3,"type":"HyVideoSampler","pos":[435,-165],"size":[345,525],"flags":{"pinned":true},"order":12,"mode":0,"inputs":[{"name":"model","type":"HYVIDEOMODEL","link":2},{"name":"hyvid_embeds","type":"HYVIDEMBEDS","link":36},{"name":"samples","type":"LATENT","link":null,"shape":7},{"name":"stg_args","type":"STGARGS","link":null,"shape":7}],"outputs":[{"name":"samples","type":"LATENT","links":[4],"slot_index":0}],"properties":{"Node name for S&R":"HyVideoSampler"},"widgets_values":[416,224,77,25,6,9,76060645194937,"randomize",true,1],"color":"#222","bgcolor":"#000"},{"id":34,"type":"VHS_VideoCombine","pos":[780,-165],"size":[555,310],"flags":{"pinned":true},"order":14,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":42},{"name":"audio","type":"AUDIO","link":null,"shape":7},{"name":"meta_batch","type":"VHS_BatchManager","link":null,"shape":7},{"name":"vae","type":"VAE","link":null,"shape":7}],"outputs":[{"name":"Filenames","type":"VHS_FILENAMES","links":[],"slot_index":0}],"properties":{"Node name for S&R":"VHS_VideoCombine"},"widgets_values":{"frame_rate":25,"loop_count":0,"filename_prefix":"Black Mixture_HunyuanVideo","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":19,"save_metadata":true,"pingpong":false,"save_output":true,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"Black 
Mixture_HunyuanVideo_00007.mp4","subfolder":"","type":"output","format":"video/h264-mp4","frame_rate":24},"muted":false}},"color":"#222","bgcolor":"#000"},{"id":30,"type":"HyVideoTextEncode","pos":[30,-165],"size":[405,660],"flags":{"pinned":true},"order":11,"mode":0,"inputs":[{"name":"text_encoders","type":"HYVIDTEXTENCODER","link":35},{"name":"custom_prompt_template","type":"PROMPT_TEMPLATE","link":null,"shape":7},{"name":"clip_l","type":"CLIP","link":null,"shape":7}],"outputs":[{"name":"hyvid_embeds","type":"HYVIDEMBEDS","links":[36]}],"properties":{"Node name for S&R":"HyVideoTextEncode"},"widgets_values":["A car races through wet city streets as cops chase close behind. The camera zooms in to track the car.","bad quality video",[false,true]],"color":"#222","bgcolor":"#000"},{"id":51,"type":"Note","pos":[1335,-255],"size":[510,750],"flags":{"collapsed":false},"order":10,"mode":0,"inputs":[],"outputs":[],"title":"IMPORTANT NOTES","properties":{},"widgets_values":["Black Mixture Hunyuan Workflow Text→to→Video\n\n→ Need help? 
Message me on Patreon: 🔗https://www.patreon.com/c/BlackMixture\n\n→ Discord:\n🔗https://discord.gg/n2N7Hgvn7n\n\n→ Check out our YT channel: \n🔗https://www.youtube.com/@BlackMixture/videos\n\n→ CENTRAL HUB FOR WORKFLOWS/MODELS/UPSCALERS:\n🔗https://www.patreon.com/posts/115824280\n\nHUGE THANKS\n\nHope you enjoy and have fun with this workflow!\n- Nate Dwarika\nThe Black Mixture Team\n\n------------------------------------------------\n\nPrerequisites for this workflow:\n\n1️⃣ Update ComfyUI\nEnsure ComfyUI is updated to the latest version.\n\n2️⃣ Download and Place Model Files\n\n🔹 Transformer and VAE (two files, no autodownload)\n🔗https://huggingface.co/Kijai/HunyuanVideo_comfy/tree/main\n📂 Place Transformer files in: ComfyUI/models/diffusion_models\n\n📂 Place VAE files in: ComfyUI/models/vae\n\n🔹 LLM Text Encoder (autodownloads on first run)\n🔗 https://huggingface.co/Kijai/llava-llama-3-8b-text-encoder-tokenizer\n📂 Files go to: ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer\n\n🔹 CLIP Text Encoder (autodownloads on first run)\nFor now, using the original CLIP model:\n🔗 https://huggingface.co/openai/clip-vit-large-patch14\n📂 Place the .safetensors from the weights and all the config files in: ComfyUI/models/clip/clip-vit-large-patch14\n\nHUGE THANKS TO KIJAI FOR THE WRAPPERS, ORIGINAL WORKFLOW, & INSTALL NOTES\n\n------------------------------------------------\n\n💖 Thanks to our Supporters on Patreon! 
💖\n\nTH\nOlya Vell\nMike Swiderski\nJohn Zeller\nhoodTRONIK\nmiguel bautista\nElizabeth Strickler\nChristopher Avina\nPascal Edrich\nJorge Morales Gonzalez\nVeiss\nBrian Haley\nFinn Negrello\nFoc eighty two\nRedstar71\nNoah Stump\nJax Lumford\nDK TV\nSathya Bee\nGianluca Amati\nDoug Barreiro\nAkari Lara\nSamuel\nWade Jensen\nTim Überlackner\nFabio Brunello\nMcPixel\nNG88\nDanijel Mesko\nVinicius Fischer\nMarco Linder\nUltimatech CN\nhigh vibes\nAdriana\nAlexander Pereira\nOleksii Kapustin\nAndreas Urra\nJames Reid\nDIGIMANN\nSunny\nRL Smith\nOkechukwu Ikwunze\nPeter\nIvan Ivan\nBrandon J\nMarc Joy\nIan Camarillo\nJames Thompson\nMifune\nRobb Calhoon\nRudy VFX\nCraig Beacham\nKamil Banc\nArachnaas\nDana samsonov\nMorgan NOSIDE\nChris Boyle\nRelatos para escuchar\nMalcolm Jones\nAdam Michell"],"color":"#9d42ff","bgcolor":"#892eff"}],"links":[[2,1,0,3,0,"HYVIDEOMODEL"],[4,3,0,5,1,"LATENT"],[6,7,0,5,0,"VAE"],[35,16,0,30,0,"HYVIDTEXTENCODER"],[36,30,0,3,1,"HYVIDEMBEDS"],[42,5,0,34,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.8264462809917354,"offset":[371.78684365821863,341.5273089133264]},"0246.VERSION":[0,0,4],"ue_links":[]},"version":0.4}

Additional Context

(Please add any additional context or steps to reproduce the error here)
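Given the traceback, a likely workaround (this is an assumption based on the unset `CUDA_PATH` seen in the final frames, not a confirmed fix) is to point `CUDA_PATH` at an installed CUDA Toolkit before ComfyUI starts, so Triton can locate `ptxas`. The directory in the sketch below is an example install location, not taken from this report — adjust it to the local machine:

```python
import os

# Hypothetical workaround sketch: ensure CUDA_PATH is defined before any
# Triton kernel is compiled. The directory below is an example; replace it
# with the actual CUDA Toolkit install path on this machine.
os.environ.setdefault(
    "CUDA_PATH",
    r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4",
)

print(os.environ["CUDA_PATH"])
```

Alternatively, `set CUDA_PATH=...` in the portable build's launcher `.bat` achieves the same thing without touching code, and if no CUDA Toolkit is installed at all, switching the HyVideoModelLoader attention mode away from `sageattn_varlen` (e.g. to the default SDPA path) may avoid the Triton compile entirely.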
