
HyVideoSampler [WinError 5] Access denied. #50

Open

liudq80 opened this issue Dec 6, 2024 · 2 comments

Comments


liudq80 commented Dec 6, 2024

HyVideoSampler [WinError 5] Access denied.

Additional Context

(Please add any additional context or steps to reproduce the error here)
[two image attachments]


CapoFortuna commented Dec 7, 2024

Microsoft Windows [Version 10.0.22631.4460]
(c) Microsoft Corporation. All rights reserved.

C:\Users\vaffa>cd C:\ComfyUI

C:\ComfyUI>venv\Scripts\activate & python main.py --listen --fast
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2024-12-07 09:48:53.989190
** Platform: Windows
** Python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct  1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
** Python executable: C:\ComfyUI\venv\Scripts\python.exe
** ComfyUI Path: C:\ComfyUI
** Log path: C:\ComfyUI\comfyui.log

Prestartup times for custom nodes:
   2.4 seconds: C:\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 24576 MB, total RAM 32607 MB
pytorch version: 2.5.1+cu124
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
Using pytorch cross attention
[Prompt Server] web root: C:\ComfyUI\web
Total VRAM 24576 MB, total RAM 32607 MB
pytorch version: 2.5.1+cu124
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
### Loading: ComfyUI-Manager (V2.55.4)
### ComfyUI Version: v0.3.7-4-g93477f8 | Released on '2024-12-06'
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json

Import times for custom nodes:
   0.0 seconds: C:\ComfyUI\custom_nodes\websocket_image_save.py
   0.0 seconds: C:\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
   0.1 seconds: C:\ComfyUI\custom_nodes\ComfyUI-KJNodes
   0.3 seconds: C:\ComfyUI\custom_nodes\ComfyUI-Manager
   0.4 seconds: C:\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper
   0.6 seconds: C:\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite

Starting server

To see the GUI go to: http://0.0.0.0:8188
To see the GUI go to: http://[::]:8188
FETCH DATA from: C:\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
got prompt
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
The config attributes {'mid_block_causal_attn': True} were passed to AutoencoderKLCausal3D, but are not expected and will be ignored. Please verify your config.json configuration file.
Loading text encoder model (clipL) from: C:\ComfyUI\models\clip\clip-vit-large-patch14
Text encoder to dtype: torch.float16
Loading tokenizer (clipL) from: C:\ComfyUI\models\clip\clip-vit-large-patch14
Loading text encoder model (llm) from: C:\ComfyUI\models\LLM\llava-llama-3-8b-text-encoder-tokenizer
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [00:19<00:00,  4.88s/it]
Text encoder to dtype: torch.float16
Loading tokenizer (llm) from: C:\ComfyUI\models\LLM\llava-llama-3-8b-text-encoder-tokenizer
prompt attention_mask:  torch.Size([1, 161])
prompt attention_mask:  torch.Size([1, 77])
Using accelerate to load and assign model weights to device...
Scheduler config: FrozenDict({'num_train_timesteps': 1000, 'shift': 9.0, 'reverse': True, 'solver': 'euler', 'n_tokens': None, '_use_default_values': ['num_train_timesteps', 'n_tokens']})
Input (height, width, video_length) = (544, 960, 97)
Sampling 97 frames in 25 latents at 960x544 with 30 inference steps
  0%|                                                                                           | 0/30 [00:01<?, ?it/s]
!!! Exception during processing !!! [WinError 5] Access denied
Traceback (most recent call last):
  File "C:\ComfyUI\execution.py", line 324, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\execution.py", line 199, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\execution.py", line 170, in _map_node_over_list
    process_inputs(input_dict, i)
  File "C:\ComfyUI\execution.py", line 159, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\nodes.py", line 737, in process
    out_latents = model["pipe"](
                  ^^^^^^^^^^^^^^
  File "C:\ComfyUI\venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\diffusion\pipelines\pipeline_hunyuan_video.py", line 571, in __call__
    noise_pred = self.transformer(  # For an input image (129, 192, 336) (1, 256, 256)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\modules\models.py", line 735, in forward
    img, txt = block(*double_block_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\modules\models.py", line 205, in forward
    attn = attention(
           ^^^^^^^^^^
  File "C:\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\modules\attention.py", line 162, in attention
    x = sageattn_varlen_func(
        ^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\venv\Lib\site-packages\torch\_dynamo\eval_frame.py", line 632, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\modules\attention.py", line 23, in sageattn_varlen_func
    return sageattn_varlen(q, k, v, cu_seqlens_q, cu_seqlens_kv, max_seqlen_q, max_seqlen_kv)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\venv\Lib\site-packages\sageattention\core.py", line 340, in sageattn_varlen
    q_int8, q_scale, k_int8, k_scale, cu_seqlens_q_scale, cu_seqlens_k_scale = per_block_int8_varlen_triton(q, k, cu_seqlens_q, cu_seqlens_k, max_seqlen_q, max_seqlen_k, sm_scale=sm_scale)
                                                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\venv\Lib\site-packages\sageattention\triton\quant_per_block_varlen.py", line 85, in per_block_int8
    quant_per_block_int8_kernel[grid](
  File "C:\ComfyUI\venv\Lib\site-packages\triton\runtime\jit.py", line 345, in <lambda>
    return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\venv\Lib\site-packages\triton\runtime\jit.py", line 607, in run
    device = driver.active.get_current_device()
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\venv\Lib\site-packages\triton\runtime\driver.py", line 23, in __getattr__
    self._initialize_obj()
  File "C:\ComfyUI\venv\Lib\site-packages\triton\runtime\driver.py", line 20, in _initialize_obj
    self._obj = self._init_fn()
                ^^^^^^^^^^^^^^^
  File "C:\ComfyUI\venv\Lib\site-packages\triton\runtime\driver.py", line 9, in _create_driver
    return actives[0]()
           ^^^^^^^^^^^^
  File "C:\ComfyUI\venv\Lib\site-packages\triton\backends\nvidia\driver.py", line 414, in __init__
    self.utils = CudaUtils()  # TODO: make static
                 ^^^^^^^^^^^
  File "C:\ComfyUI\venv\Lib\site-packages\triton\backends\nvidia\driver.py", line 92, in __init__
    mod = compile_module_from_src(Path(os.path.join(dirname, "driver.c")).read_text(), "cuda_utils")
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\venv\Lib\site-packages\triton\backends\nvidia\driver.py", line 69, in compile_module_from_src
    so = _build(name, src_path, tmpdir, library_dirs(), include_dir, libraries)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\venv\Lib\site-packages\triton\runtime\build.py", line 71, in _build
    ret = subprocess.check_call(cc_cmd)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\vaffa\AppData\Local\Programs\Python\Python312\Lib\subprocess.py", line 408, in check_call
    retcode = call(*popenargs, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\vaffa\AppData\Local\Programs\Python\Python312\Lib\subprocess.py", line 389, in call
    with Popen(*popenargs, **kwargs) as p:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\vaffa\AppData\Local\Programs\Python\Python312\Lib\subprocess.py", line 1026, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "C:\Users\vaffa\AppData\Local\Programs\Python\Python312\Lib\subprocess.py", line 1538, in _execute_child
    hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PermissionError: [WinError 5] Access denied

Prompt executed in 56.15 seconds

I think I'm getting the same error, did you find the problem?
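For context, the traceback above ends in `triton/runtime/build.py`, where Triton invokes a C compiler through `subprocess` to build its CUDA driver stub (`driver.c`); a `PermissionError` at that point usually means the compiler executable or the temp build directory is blocked (antivirus, missing compiler, or restricted permissions) rather than a problem in the sampler itself. The following is a minimal diagnostic sketch under those assumptions; `diagnose_triton_build_env` is a hypothetical helper, not part of Triton or ComfyUI:

```python
import shutil
import tempfile

def diagnose_triton_build_env():
    """Check the two things Triton's JIT build step depends on:
    a C compiler on PATH and a usable temp directory."""
    findings = []
    # 1. Triton compiles a small C module at runtime, so some C compiler
    #    (MSVC's cl, gcc, or clang) must be reachable on PATH.
    for cc in ("cl", "gcc", "clang", "cc"):
        path = shutil.which(cc)
        if path:
            findings.append(f"compiler found: {cc} -> {path}")
            break
    else:
        findings.append("no C compiler found on PATH")
    # 2. The build writes the compiled module into a temp directory;
    #    WinError 5 can also come from that directory being locked down.
    try:
        with tempfile.NamedTemporaryFile(dir=tempfile.gettempdir()):
            findings.append(f"temp dir writable: {tempfile.gettempdir()}")
    except OSError as e:
        findings.append(f"temp dir problem: {e}")
    return findings

if __name__ == "__main__":
    for line in diagnose_triton_build_env():
        print(line)
```

If this reports a missing compiler or a temp-dir problem, installing the MSVC build tools or excluding the temp/venv directories from antivirus scanning would be the first things to try; switching the attention implementation away from SageAttention also avoids the Triton build path entirely.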

@GWLight

GWLight commented Dec 9, 2024

I think I'm getting the same error, did you find the problem?


3 participants