
Error when using onnx models with GPU (CUDA 10) #4656

Closed
@louistrouche

Description


System information

  • Microsoft.ML.OnnxTransformer: 1.4.0
  • Microsoft.ML: 1.4.0
  • Microsoft.ML.OnnxRuntime.Gpu (tested): 1.0.1
  • CUDA/cuDNN version: CUDA 10.0.130, cuDNN 7.6.2
  • .NET Core 2.0

Issue

I'm currently trying to run an ONNX model on a GPU and I get the following exception:

System.EntryPointNotFoundException: Unable to find an entry point named 'OrtSessionOptionsAppendExecutionProvider_CUDA' in DLL 'onnxruntime'.

This exception appears when I call the ApplyOnnxModel method:
var pipeline = _mlContext.Transforms.ApplyOnnxModel(gpuDeviceId: gpuDeviceId, modelFile: onnxModelFilePath);
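
For context, here is roughly how the full pipeline is set up. This is a minimal sketch, not my exact code: the model path, input class, column name, and device id are placeholders.

```csharp
using System.Collections.Generic;
using Microsoft.ML;
using Microsoft.ML.Data;

// Placeholder input type: the column name and vector size must match the model's input tensor.
public class OnnxInput
{
    [ColumnName("input"), VectorType(3 * 224 * 224)]
    public float[] Input { get; set; }
}

public static class Repro
{
    public static void Main()
    {
        var mlContext = new MLContext();

        // Requesting a GPU device is what leads to the
        // OrtSessionOptionsAppendExecutionProvider_CUDA entry-point lookup reported above.
        var pipeline = mlContext.Transforms.ApplyOnnxModel(
            modelFile: "model.onnx",
            gpuDeviceId: 0,
            fallbackToCpu: false);

        var emptyData = mlContext.Data.LoadFromEnumerable(new List<OnnxInput>());
        var model = pipeline.Fit(emptyData);
    }
}
```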

I have tried both with and without the Microsoft.ML.OnnxRuntime.Gpu (1.0.1) NuGet package installed, and I get the same exception either way. I have no problem running my ONNX model when I use the OnnxRuntime.Gpu package directly.
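
For comparison, the direct OnnxRuntime.Gpu usage that works looks roughly like this; the input name and tensor shape are placeholders for my actual model:

```csharp
using System.Collections.Generic;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

public static class DirectOnnxRuntime
{
    public static void Main()
    {
        // Create a session with the CUDA execution provider on device 0.
        // With Microsoft.ML.OnnxRuntime.Gpu 1.0.1 this resolves the CUDA entry point without issue.
        using (var options = SessionOptions.MakeSessionOptionWithCudaProvider(0))
        using (var session = new InferenceSession("model.onnx", options))
        {
            // "input" and the 1x3x224x224 shape are placeholders for the real model's input.
            var tensor = new DenseTensor<float>(new[] { 1, 3, 224, 224 });
            var inputs = new List<NamedOnnxValue>
            {
                NamedOnnxValue.CreateFromTensor("input", tensor)
            };

            using (var results = session.Run(inputs))
            {
                // Inference runs on the GPU as expected.
            }
        }
    }
}
```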

It seems that even when the OnnxRuntime.Gpu package is installed, the CPU version of the runtime is the one that gets loaded. Is there a way to make this work without building a local ML.NET OnnxTransformer package from source that depends on the OnnxRuntime.Gpu package?

Source code / logs

(Screenshot: error_with_onnx_gpu)
