
CUDA_PATH is set but CUDA wasnt able to be loaded #21272

Open

Description

Describe the issue

I'm using A1111 with an extension that masks the background. When I try to run the generation to get the mask, I run into problems. There are no install instructions anywhere, neither for the extension nor for ONNX Runtime, so all I have to go by are the error messages. They said TensorRT was missing, so I installed it. It still says it's missing, so maybe that's just a misleading error message.

It also says that I don't have CUDA and cuDNN, so I installed those too. I have made sure all three are in the system variables: only CUDA has its own variable, while the other two are appended to the plain "PATH", since I can't find anywhere whether they need their own variable the way CUDA has its "CUDA_PATH". And here is where I am now, with no solution in sight: "CUDA_PATH is set but CUDA wasnt able to be loaded". CUDA Toolkit is v11.8, cuDNN is v8.9.2.26, and TensorRT is v10.0, versions that you have listed as compatible.
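To rule out a broken PATH, here is a small diagnostic sketch I can run in the venv (the DLL names are my assumption for CUDA 11.8 / cuDNN 8.x on Windows; error 126 from LoadLibrary usually means a DLL or one of its dependencies can't be found):

```python
# Try to load the CUDA/cuDNN DLLs the way onnxruntime's native code does.
# winmode=0 mimics a plain LoadLibrary call, which searches PATH.
import ctypes
import os

print("CUDA_PATH =", os.environ.get("CUDA_PATH"))

# DLL names are an assumption for CUDA 11.8 / cuDNN 8.x on Windows.
for name in ("cudart64_110.dll", "cublas64_11.dll", "cudnn64_8.dll"):
    try:
        ctypes.WinDLL(name, winmode=0)
        print("OK     ", name)
    except OSError as exc:
        print("FAILED ", name, "-", exc)
```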

Do I also need the TensorRT Python module, and if so, which version? It was not installed with A1111, the extension, or their dependencies.
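For what it's worth, this is how I checked whether the module is present at all (assuming the `tensorrt` package from PyPI is the one onnxruntime would use):

```python
# Report whether the TensorRT Python module is importable from this venv.
try:
    import tensorrt
    print("tensorrt", tensorrt.__version__)
except ImportError as exc:
    print("tensorrt module is not installed:", exc)
```

On my setup it reports the module is not installed.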

The system can find my CUDA install:

```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:41:10_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
```
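For completeness, here is a quick way to see what the installed onnxruntime wheel claims to support (note that `get_available_providers` only lists what the wheel was built with, not whether its DLLs actually load):

```python
import onnxruntime as ort

print(ort.__version__)                # installed onnxruntime version
print(ort.get_device())               # "GPU" for a GPU-enabled wheel
print(ort.get_available_providers())  # EPs compiled into this build
```

The console output when the extension tries to create the session is below.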
```
2024-07-06 16:53:29.9923561 [E:onnxruntime:Default, provider_bridge_ort.cc:1534 onnxruntime::TryGetProviderInfo_TensorRT] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Stable diffusion\venv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"

*************** EP Error ***************
EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:456 onnxruntime::python::RegisterTensorRTPluginsAsCustomOps Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
 when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
2024-07-06 16:53:30.0643700 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Stable diffusion\venv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
```
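I assume the same failure would reproduce outside A1111 with a bare session, something like this (untested sketch; `model.onnx` stands in for any ONNX model file):

```python
# Request only the CUDA EP so the load failure surfaces directly
# instead of being hidden by the CPU fallback.
import onnxruntime as ort

sess = ort.InferenceSession(
    "model.onnx",  # stand-in for any ONNX model file
    providers=["CUDAExecutionProvider"],
)
print(sess.get_providers())  # shows which providers actually got enabled
```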

```
*** Error completing request
*** Arguments: ('task(7d1w0f8aymy9rid)', 0, <PIL.Image.Image image mode=RGB size=1280x1708 at 0x2721DB8B880>, None, '', '', True, 0, 4, 512, 512, True, 'None', 'None', 0, False, 1, False, 1, 0, False, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1, False, ['Deepbooru'], False, ['Horizontal'], False, 0.5, 0.2, True, 'isnet-anime', True, False, 240, 10, 10) {}
    Traceback (most recent call last):
      File "C:\Stable diffusion\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
        self._create_inference_session(providers, provider_options, disabled_optimizers)
      File "C:\Stable diffusion\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 469, in _create_inference_session
        self._register_ep_custom_ops(session_options, providers, provider_options, available_providers)
      File "C:\Stable diffusion\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 516, in _register_ep_custom_ops
        C.register_tensorrt_plugins_as_custom_ops(session_options, provider_options[i])
    RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:456 onnxruntime::python::RegisterTensorRTPluginsAsCustomOps Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\Stable diffusion\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Stable diffusion\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Stable diffusion\modules\postprocessing.py", line 132, in run_postprocessing_webui
        return run_postprocessing(*args, **kwargs)
      File "C:\Stable diffusion\modules\postprocessing.py", line 73, in run_postprocessing
        scripts.scripts_postproc.run(initial_pp, args)
      File "C:\Stable diffusion\modules\scripts_postprocessing.py", line 196, in run
        script.process(single_image, **process_args)
      File "C:\Stable diffusion\extensions\stable-diffusion-webui-rembg\scripts\postprocessing_rembg.py", line 66, in process
        session=rembg.new_session(model),
      File "C:\Stable diffusion\venv\lib\site-packages\rembg\session_factory.py", line 26, in new_session
        return session_class(model_name, sess_opts, providers, *args, **kwargs)
      File "C:\Stable diffusion\venv\lib\site-packages\rembg\sessions\base.py", line 31, in __init__
        self.inner_session = ort.InferenceSession(
      File "C:\Stable diffusion\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 432, in __init__
        raise fallback_error from e
      File "C:\Stable diffusion\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 427, in __init__
        self._create_inference_session(self._fallback_providers, None)
      File "C:\Stable diffusion\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
        sess.initialize_session(providers, provider_options, disabled_optimizers)
    RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:857 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.
```
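In case it helps triage: I would expect forcing the CPU provider to sidestep the crash (untested sketch, assuming the installed rembg's `new_session` accepts a `providers` argument, which the traceback above suggests it does):

```python
# Workaround sketch: run rembg on CPU only, bypassing the broken
# CUDA/TensorRT provider load. "isnet-anime" is the model the
# extension was using in the arguments above.
import rembg

session = rembg.new_session("isnet-anime", providers=["CPUExecutionProvider"])
```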

To reproduce

I have no real clue, but roughly: install A1111, install the "stable-diffusion-webui-rembg" extension, then use it to try to generate a mask to select the background.

Urgency

Not that urgent. A1111 and other AI projects run just fine otherwise.

Platform

Windows

OS Version

Windows 10

ONNX Runtime Installation

Other / Unknown

ONNX Runtime Version or Commit ID

onnx-1.15.0

ONNX Runtime API

Python

Architecture

X64

Execution Provider

Other / Unknown

Execution Provider Library Version

No response


Metadata

Labels

ep:CUDA (issues related to the CUDA execution provider), ep:TensorRT (issues related to the TensorRT execution provider), stale (issues that have not been addressed in a while; categorized by a bot)
