
RTX 50 series issues #27

@MadMan2k

Description

First, confirm

  • I have read the instructions carefully
  • I have searched the existing issues
  • I have updated the extension to the latest version

What happened?

There seem to be issues getting ReActor to work on the RTX 50 series. I am using AUTOMATIC1111 with CUDA 12.8, along with updated PyTorch (2.6.0+cu128) and Torchvision (0.20.0a0+cu128). The --xformers option is not enabled; it was incompatible at first, although xformers 0.0.29.post2 now works with the RTX 50 series. Either way, I am still unable to get ReActor to work.
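As a quick sanity check before digging into ReActor itself, the stack described above can be probed from plain Python. This is a hypothetical diagnostic sketch, not part of the original report; the package names are the real PyPI ones:

```python
import importlib.util

def probe(names=("torch", "torchvision", "onnxruntime")):
    """Report which packages are importable, plus ONNX Runtime's providers."""
    info = {name: importlib.util.find_spec(name) is not None for name in names}
    if info.get("onnxruntime"):
        import onnxruntime as ort
        # Caveat: onnxruntime-gpu advertises CUDAExecutionProvider here even
        # when its native onnxruntime_providers_cuda.dll cannot be loaded; the
        # "error 126" failure only surfaces at InferenceSession creation.
        info["providers"] = ort.get_available_providers()
    return info

print(probe())
```

If all three packages report as installed and CUDAExecutionProvider is listed but session creation still fails, the problem is the provider DLL's own dependencies rather than the Python packages.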

Steps to reproduce the problem

Use ReActor with an RTX 50 series GPU.

Sysinfo

Windows 10, RTX 5070 Ti

Relevant console log

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing requirements
CUDA 12.8
Launching Web UI with arguments:
C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\system\python\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
13:51:28 - ReActor - STATUS - Running v0.7.1-b2 on Device: CUDA
Loading weights [6ce0161689] from C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\webui\configs\v1-inference.yaml
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 9.4s (prepare environment: 3.9s, import torch: 2.8s, import gradio: 0.6s, setup paths: 0.6s, initialize shared: 0.2s, other imports: 0.3s, load scripts: 0.5s, create ui: 0.3s, gradio launch: 0.2s).
Applying attention optimization: Doggettx... done.
Model loaded in 1.7s (create model: 0.4s, apply weights to model: 1.0s, calculate empty prompt: 0.1s).
13:52:03 - ReActor - STATUS - Working: source face index [0], target face index [0]
13:52:03 - ReActor - STATUS - Analyzing Source Image...
2025-03-29 13:52:03.2913621 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\system\python\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

*************** EP Error ***************
EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:857 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.
 when using ['CUDAExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
2025-03-29 13:52:03.3504827 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\system\python\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

*** Error completing request
*** Arguments: ('task(o9s8hnfxky4tlgg)', 0.0, <PIL.Image.Image image mode=RGBA size=827x1000 at 0x2C1E43F9810>, None, '', '', True, True, 0.0, 4, 0.0, 512, 512, True, 'None', 'None', 0, False, 1, False, 1, 0, False, 0.5, 0.2, False, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1, False, ['Horizontal'], False, ['Deepbooru'], <PIL.Image.Image image mode=RGB size=220x220 at 0x2C1E43F98D0>, True, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1, 1, 0, 0, 0.5, 'CUDA', False, 0, 'None', '', None, False, False, 0.5, 0, 'tab_single') {}
    Traceback (most recent call last):
      File "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\system\python\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
        self._create_inference_session(providers, provider_options, disabled_optimizers)
      File "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\system\python\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
        sess.initialize_session(providers, provider_options, disabled_optimizers)
    RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:857 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.


    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\webui\modules\call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
      File "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\webui\modules\call_queue.py", line 53, in f
        res = func(*args, **kwargs)
      File "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\webui\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\webui\modules\postprocessing.py", line 133, in run_postprocessing_webui
        return run_postprocessing(*args, **kwargs)
      File "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\webui\modules\postprocessing.py", line 73, in run_postprocessing
        scripts.scripts_postproc.run(initial_pp, args)
      File "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\webui\modules\scripts_postprocessing.py", line 198, in run
        script.process(single_image, **process_args)
      File "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\webui\extensions\sd-webui-reactor\scripts\reactor_faceswap.py", line 688, in process
        result, output, swapped = swap_face(
      File "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\webui\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 544, in swap_face
        source_faces = analyze_faces(source_img, det_thresh=detection_options.det_thresh, det_maxnum=detection_options.det_maxnum)
      File "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\webui\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 302, in analyze_faces
        face_analyser = copy.deepcopy(getAnalysisModel())
      File "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\webui\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 145, in getAnalysisModel
        ANALYSIS_MODEL = insightface.app.FaceAnalysis(
      File "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\webui\extensions\sd-webui-reactor\scripts\console_log_patch.py", line 48, in patched_faceanalysis_init
        model = model_zoo.get_model(onnx_file, **kwargs)
      File "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\system\python\lib\site-packages\insightface\model_zoo\model_zoo.py", line 96, in get_model
        model = router.get_model(providers=providers, provider_options=provider_options)
      File "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\webui\extensions\sd-webui-reactor\scripts\console_log_patch.py", line 21, in patched_get_model
        session = PickableInferenceSession(self.onnx_file, **kwargs)
      File "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\system\python\lib\site-packages\insightface\model_zoo\model_zoo.py", line 25, in __init__
        super().__init__(model_path, **kwargs)
      File "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\system\python\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 432, in __init__
        raise fallback_error from e
      File "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\system\python\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 427, in __init__
        self._create_inference_session(self._fallback_providers, None)
      File "C:\AI\SD_RTX_50\sd.webui-1.10.1-blackwell\system\python\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
        sess.initialize_session(providers, provider_options, disabled_optimizers)
    RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:857 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.
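The LoadLibrary "error 126" above means Windows could not find onnxruntime_providers_cuda.dll or, more commonly, one of the CUDA/cuDNN DLLs it depends on. One way to keep face analysis usable while that is sorted out is to bypass the CUDA execution provider entirely. A hedged sketch, assuming only that onnxruntime is installed; the function name is illustrative, not ReActor's API:

```python
try:
    import onnxruntime as ort  # provided by onnxruntime or onnxruntime-gpu
except ImportError:
    ort = None

def cpu_only_session(model_path):
    """Open an ONNX model with only the CPU execution provider.

    Listing just CPUExecutionProvider skips the LoadLibrary call that fails
    with error 126 in the log above, at the cost of running on the CPU.
    """
    if ort is None:
        raise RuntimeError("onnxruntime is not installed")
    return ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
```

ReActor itself exposes a device choice (visible as 'CUDA' in the argument dump above); switching it to CPU should exercise the same code path without touching the broken CUDA provider.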

Additional information

No response


    Labels

    bug (Something isn't working), new
