
Different output when running inference on CPU / GPU in some cases #4488

Closed

Description

Describe the bug
The output of an ONNX model on GPU seems to differ from the output on CPU in some cases (in my case, segmentation!).

Urgency
ASAP

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 64-bit
  • ONNX Runtime installed from (source or binary): binary (via pip install onnxruntime-gpu)
  • ONNX Runtime version: 1.3.0
  • Python version: 3.7.2
  • Visual Studio version (if applicable): 2019
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version: CUDA 10.1, cuDNN 7.6.5
  • GPU model and memory: 1080 Ti

To Reproduce
I trained a segmentation model in Keras and converted it to an ONNX model using keras2onnx.
But there seems to be a non-negligible difference between the outputs of these models.
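
The conversion step itself is simple (a minimal sketch; the tiny placeholder model and file name below stand in for my actual segmentation network):

```python
import keras2onnx
from tensorflow import keras

# Placeholder model standing in for my actual segmentation network.
model = keras.Sequential([
    keras.layers.Conv2D(1, 3, padding="same", activation="sigmoid",
                        input_shape=(256, 256, 3)),
])

# Convert the Keras model to ONNX and save it to disk.
onnx_model = keras2onnx.convert_keras(model, model.name)
keras2onnx.save_model(onnx_model, "segmentation.onnx")
```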
Here is an example: dog segmentation.

[image: onnx_segmentation_gpu]

[image: onnx_segmentation_cpu]

The output of onnxruntime on CPU is almost the same as that of Keras, but the output of onnxruntime on GPU is clearly different!
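
Here is roughly how I compare the two execution providers (a sketch; the model path and input shape are placeholders, and I switch a single session between providers via set_providers):

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("segmentation.onnx")  # placeholder path
input_name = sess.get_inputs()[0].name
x = np.random.rand(1, 256, 256, 3).astype(np.float32)  # placeholder shape

# With onnxruntime-gpu installed, CUDA is the default provider.
sess.set_providers(["CUDAExecutionProvider"])
out_gpu = sess.run(None, {input_name: x})[0]

# Restrict the same session to the CPU provider and run again.
sess.set_providers(["CPUExecutionProvider"])
out_cpu = sess.run(None, {input_name: x})[0]

# For the segmentation model this difference is far beyond float noise.
print("max abs diff:", np.abs(out_gpu - out_cpu).max())
```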

By the way, for a classification model, the output of onnxruntime on GPU is almost the same as that of Keras...

[image: onnx_classification_gpu]

And here are the source code, weights, and image.

Did I make a mistake when converting the Keras model, or when running inference on GPU?

Thank you.

Expected behavior
The outputs should be almost the same across Keras, onnxruntime on CPU, and onnxruntime on GPU.
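
Concretely, I would expect a check like this to pass (a sketch; out_cpu and out_gpu come from the comparison script above, and the tolerances are just a guess at what "almost the same" should mean for float32 inference):

```python
import numpy as np

# Guessed tolerances for float32 inference; out_cpu / out_gpu as above.
np.testing.assert_allclose(out_cpu, out_gpu, rtol=1e-3, atol=1e-4)
```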


Labels

ep:CUDA (issues related to the CUDA execution provider)
