Description
Opened on Jul 11, 2020
Describe the bug
The output of an ONNX model on GPU seems to differ from that on CPU in some cases (in my case, segmentation).
Urgency
ASAP
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 64-bit
- ONNX Runtime installed from (source or binary): binary (via pip install onnxruntime-gpu)
- ONNX Runtime version: 1.3.0
- Python version: 3.7.2
- Visual Studio version (if applicable): 2019
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: CUDA 10.1, cuDNN 7.6.5
- GPU model and memory: 1080Ti
To Reproduce
I trained a segmentation model in Keras and converted it to an ONNX model with keras2onnx, roughly as sketched below. However, there seems to be a non-negligible difference between the outputs of the two models.
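The conversion followed the standard keras2onnx usage; the file names here are placeholders, not the actual attached files:

```python
import keras2onnx
from tensorflow.keras.models import load_model

# Placeholder file names -- the real model and weights are in the attachment
model = load_model("segmentation_model.h5")

# Convert the in-memory Keras model to an ONNX graph and write it to disk
onnx_model = keras2onnx.convert_keras(model, model.name)
keras2onnx.save_model(onnx_model, "segmentation_model.onnx")
```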
Here is an example (dog segmentation). The output of onnxruntime on CPU is almost the same as that of Keras, but the output of onnxruntime on GPU is noticeably different. (By the way, for a classification model, the output of onnxruntime on GPU is almost the same as that of Keras.) A minimal comparison is sketched below.
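This is roughly how I compared the providers. The model path and input shape are placeholders; with onnxruntime-gpu 1.3.0 the session defaults to the CUDA execution provider and can be switched to CPU via set_providers:

```python
import numpy as np
import onnxruntime as ort

# Placeholder model path and input shape -- replace with the attached model/image
sess = ort.InferenceSession("segmentation_model.onnx")  # onnxruntime-gpu picks CUDA by default
input_name = sess.get_inputs()[0].name
x = np.random.rand(1, 256, 256, 3).astype(np.float32)

gpu_out = sess.run(None, {input_name: x})[0]

# Re-run the same session on the CPU execution provider
sess.set_providers(["CPUExecutionProvider"])
cpu_out = sess.run(None, {input_name: x})[0]

# Largest elementwise discrepancy between the two providers
print("max abs diff:", np.abs(gpu_out - cpu_out).max())
```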
And here are the source code, weights, and image. Did I make a mistake when converting the Keras model, or when running inference on GPU?
Thank you.
Expected behavior
The outputs should be almost the same across Keras, onnxruntime on CPU, and onnxruntime on GPU.