Issues: microsoft/onnxruntime
CUDA Error cudaErrorUnsupportedPtxVersion with NVIDIA H800 (Compute Capability 9.0) on ONNXRuntime GPU 1.19.2
#22212, opened Sep 25, 2024 by xh-liu-tech. Labels: ep:CUDA (issues related to the CUDA execution provider).

[Documentation] CUDA version for default onnxruntime-gpu is wrong
#22178, opened Sep 23, 2024 by mmeendez8. Labels: documentation (improvements or additions to documentation), ep:CUDA.

[Question or BUG] ONNX Runtime CUDA Sessions in Unity Produce Empty Outputs When Running Multiple Models Sequentially on a Single Graphics Card
#22146, opened Sep 19, 2024 by abysslover. Labels: api:CSharp (issues related to the C# API), ep:CUDA.

onnxruntime-gpu (1.18.0) cannot be installed
#22028, opened Sep 9, 2024 by Jalen-Zhong. Labels: ep:CUDA.

[Inference & Training] My ONNX Runtime isn't detecting CUDA even though all paths are set correctly with compatible software
#22016, opened Sep 6, 2024 by tamannashah18. Labels: ep:CUDA, training (issues related to ONNX Runtime training; typically submitted using template).

CUDA does not load on Windows
#22000, opened Sep 5, 2024 by koush. Labels: ep:CUDA, platform:windows (issues related to the Windows platform).

[Performance] CUDAExecutionProvider without RoiAlign (opset 16 version)
#21990, opened Sep 5, 2024 by YuriGao. Labels: ep:CUDA, performance (issues related to performance regressions).

[CUDA][Performance] Inference time varies greatly during session run
#21966, opened Sep 3, 2024 by roxanacincan. Labels: ep:CUDA, performance.

Segfault when using IO binding to CUDA tensor with CPU execution provider
#21865, opened Aug 26, 2024 by adamreeve. Labels: ep:CUDA, stale (issues that have not been addressed in a while; categorized by a bot).

Different outputs when run on CPU vs GPU (CUDA)
#21859, opened Aug 26, 2024 by lucian-cap. Labels: ep:CUDA, model:transformer (issues related to a transformer model: BERT, GPT2, Hugging Face, Longformer, T5, etc.), stale.

ORTModelForSeq2SeqLM.from_pretrained cannot use provider=['CUDAExecutionProvider', 'CPUExecutionProvider']
#21733, opened Aug 14, 2024 by EASTERNTIGER. Labels: ep:CUDA, stale.

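Issues like the one above revolve around the priority-ordered provider list that onnxruntime sessions accept (`providers=["CUDAExecutionProvider", "CPUExecutionProvider"]`): when a requested provider is unavailable, execution is meant to fall through to the next one. A minimal pure-Python sketch of that fallback idea; `select_providers` is a hypothetical helper for illustration, not part of the onnxruntime API:

```python
def select_providers(requested, available):
    """Keep the requested execution providers that are actually available,
    preserving their priority order, and always fall back to the CPU
    provider so a session can still be created when CUDA is missing."""
    chosen = [p for p in requested if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen

# With CUDA available, the GPU provider keeps top priority:
print(select_providers(
    ["CUDAExecutionProvider", "CPUExecutionProvider"],
    ["CUDAExecutionProvider", "CPUExecutionProvider"]))
# Without CUDA, only the CPU provider survives the filter:
print(select_providers(
    ["CUDAExecutionProvider", "CPUExecutionProvider"],
    ["CPUExecutionProvider"]))
```

In the real library, `onnxruntime.get_available_providers()` reports what the installed build supports, which is the first thing to check when a GPU provider silently drops out.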
[Build] fail to build vs CUDA 12.6 on Windows
#21676, opened Aug 8, 2024 by mc-nv. Labels: build (build issues; typically submitted using template), ep:CUDA, platform:windows, rel-1.19.0.

[cuda ep] Squeeze node fails when axes is not provided
#21661, opened Aug 7, 2024 by justinchuby. Labels: converter:dynamo (issues related to supporting the PyTorch Dynamo exporter), ep:CUDA.

[Performance]
#21654, opened Aug 7, 2024 by eduardatmadenn. Labels: ep:CUDA, performance, stale.

Does the Java GPU dependency of ONNX Runtime version 1.18 only support CUDA 12?
#21651, opened Aug 7, 2024 by dongfeng3692. Labels: api:Java (issues related to the Java API), ep:CUDA, stale.

CUDA_PATH is set but CUDA wasn't able to be loaded
#21527, opened Jul 27, 2024 by Noor-Nizar. Labels: ep:CUDA, stale.

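Load failures like this usually come down to the CUDA directories not being discoverable at runtime rather than anything inside onnxruntime. A stdlib-only sketch of a pre-flight check (it only inspects the environment and filesystem; it does not attempt to load any CUDA library, and the exact message strings are illustrative):

```python
import os

def check_cuda_path():
    """Return a list of human-readable findings about the CUDA_PATH setup."""
    findings = []
    cuda_path = os.environ.get("CUDA_PATH")
    if not cuda_path:
        findings.append("CUDA_PATH is not set")
        return findings
    findings.append("CUDA_PATH = " + cuda_path)
    if not os.path.isdir(cuda_path):
        findings.append("CUDA_PATH does not point at an existing directory")
        return findings
    bin_dir = os.path.join(cuda_path, "bin")
    if bin_dir not in os.environ.get("PATH", "").split(os.pathsep):
        # On Windows, a missing DLL directory on PATH is a common cause
        # of LoadLibrary failures when the CUDA EP is requested.
        findings.append("CUDA bin directory is not on PATH")
    return findings

for finding in check_cuda_path():
    print(finding)
```

Running this before creating a session separates "the toolkit is not installed where CUDA_PATH says" from genuine onnxruntime/CUDA version mismatches.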
Onnxruntime LoadLibrary failed with error 126
#21501, opened Jul 25, 2024 by Sumphy-ai. Labels: ep:CUDA, stale.

[CUDA, DML] MatMul does not properly handle matrices with inner dim == 0
#21483, opened Jul 24, 2024 by yuslepukhin. Labels: core runtime (issues related to core runtime), ep:CUDA, ep:DML (issues related to the DirectML execution provider), platform:windows, stale.

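The expected semantics for the inner-dimension-zero case in #21483: the product of an (m, 0) matrix with a (0, n) matrix is an (m, n) matrix of zeros, because each output element is an empty sum. A pure-Python sketch of those semantics (nested lists stand in for tensors; `n` is passed explicitly, since an empty `b` carries no column information):

```python
def matmul(a, b, n):
    """Naive matrix product of a (m x k) and b (k x n), given as nested
    lists. When k == 0 the inner sum is empty, so every entry is 0;
    the result is a zero-filled m x n matrix, not an error."""
    k = len(b)
    return [[sum(row[i] * b[i][j] for i in range(k)) for j in range(n)]
            for row in a]

print(matmul([[], []], [], 3))       # inner dim 0: [[0, 0, 0], [0, 0, 0]]
print(matmul([[1, 2]], [[3], [4]], 1))  # ordinary case: [[11]]
```

This mirrors NumPy's behavior for degenerate shapes, where `(2, 0) @ (0, 3)` yields a `(2, 3)` array of zeros.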
[Performance] The 16-bit quantized QDQ model cannot be accelerated by CUDA
#21478, opened Jul 24, 2024 by duanshengliu. Labels: ep:CUDA, performance, quantization (issues related to quantization), stale.

onnxruntime.InferenceSession.run sometimes gets stuck, sometimes not
#21418, opened Jul 19, 2024 by quarrying. Labels: ep:CUDA, stale.

Multi-threaded GPU inference failing with whisper-small: Non-zero status code returned while running DecoderMaskedMultiHeadAttention node
#21413, opened Jul 19, 2024 by david-sitsky. Labels: api:Java, ep:CUDA.

[Feature Request] Mark as negative tests for minimal CUDA build
#21394, opened Jul 17, 2024 by poweiw. Labels: ep:CUDA, feature request (request for unsupported feature or enhancement).

[Feature Request] Request grid_sample 5D support 🌟
#21382, opened Jul 17, 2024 by juntaosun. Labels: ep:CUDA, feature request.

ONNX Runtime 1.18.1 CUDA 12.4 cuDNN 9.2 breaks inference with repeated inputs when enable_mem_reuse is enabled
#21349, opened Jul 14, 2024 by SystemPanic. Labels: api (issues related to all other APIs: C, C++, Python, etc.), ep:CUDA, model:transformer, platform:windows, stale.

ProTip! Exclude everything labeled bug with -label:bug.