
[Performance] The 16-bit quantization QDQ model cannot be accelerated by CUDA #21478

Open · duanshengliu opened this issue on Jul 24, 2024 · 2 comments
Labels: ep:CUDA, performance, quantization, stale

duanshengliu (Contributor) commented on Jul 24, 2024:

Describe the issue

My GPU is a V100 (CUDA version 12.0 or 11.8), and the CPU is an Intel(R) Xeon(R) Gold 6271C @ 2.60GHz.

I benchmarked the A8W8 and A16W16 quantized models on both the CPU and CUDA execution providers. The A16W16 model is actually slower on CUDA than on the CPU.

Summary:

| Total Inference Time (s), repeat=100 | A8W8 | A16W16 |
| --- | --- | --- |
| CPUExecutionProvider | 6.698 s ✔️ | 30.961 s ✔️ |
| CUDAExecutionProvider | 3.870 s ✔️ | 42.365 s |

Moreover, the A16W8 and A8W16 quantized models show similar issues.
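For context, an A16W16 QDQ model like the one above can be produced with ONNX Runtime's static quantizer. The sketch below is illustrative only (the actual models are in performance.zip); the input name, input shape, the signed/unsigned choice for the 16-bit types, and the `UseQDQContribOps` option are assumptions, not taken from the original script:

```python
import numpy as np
from onnxruntime.quantization import (CalibrationDataReader, QuantFormat,
                                      QuantType, quantize_static)

class RandomDataReader(CalibrationDataReader):
    """Feeds a few random NCHW batches for calibration (illustrative only)."""
    def __init__(self, input_name="input", n=8):
        self._data = iter(
            [{input_name: np.random.rand(1, 3, 224, 224).astype(np.float32)}
             for _ in range(n)]
        )

    def get_next(self):
        return next(self._data, None)

quantize_static(
    "mobilenetv2.onnx",                  # float32 source model (illustrative path)
    "mobilenetv2_a16w16.onnx",
    RandomDataReader(),
    quant_format=QuantFormat.QDQ,        # insert QuantizeLinear/DequantizeLinear pairs
    activation_type=QuantType.QUInt16,   # A16 (signedness assumed)
    weight_type=QuantType.QInt16,        # W16 (signedness assumed)
    # 16-bit Q/DQ may require the com.microsoft contrib ops on opsets < 21:
    extra_options={"UseQDQContribOps": True},
)
```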

To reproduce

This issue can be reproduced using the files in performance.zip. The reproduction commands and results are as follows:

```
cd path/to/performance
python run.py
```

You will then see results like the following:

```
mobilenetv2_a8w8.onnx ['CPUExecutionProvider'] Total Inference Time: 6.698 seconds
mobilenetv2_a8w8.onnx ['CUDAExecutionProvider'] Total Inference Time: 3.870 seconds
================================================================================
mobilenetv2_a16w16.onnx ['CPUExecutionProvider'] Total Inference Time: 30.961 seconds
mobilenetv2_a16w16.onnx ['CUDAExecutionProvider'] Total Inference Time: 42.365 seconds
================================================================================
```
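run.py ships in the zip; for readers without it, here is a minimal sketch of the kind of timing loop it presumably contains. The input shape and the single warm-up run are assumptions:

```python
import time
import numpy as np
import onnxruntime as ort

def benchmark(model_path, providers, repeat=100):
    sess = ort.InferenceSession(model_path, providers=providers)
    name = sess.get_inputs()[0].name
    # QDQ models keep float32 graph inputs; shape assumed to be 1x3x224x224.
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)
    sess.run(None, {name: x})            # warm-up (session/kernel initialization)
    start = time.perf_counter()
    for _ in range(repeat):
        sess.run(None, {name: x})
    total = time.perf_counter() - start
    print(f"{model_path} {providers} Total Inference Time: {total:.3f} seconds")

for model in ("mobilenetv2_a8w8.onnx", "mobilenetv2_a16w16.onnx"):
    for providers in (["CPUExecutionProvider"], ["CUDAExecutionProvider"]):
        benchmark(model, providers)
    print("=" * 80)
```

To see whether the 16-bit QDQ nodes are actually assigned to the CUDA EP, enabling session profiling (`sess_options.enable_profiling = True`) or verbose logging makes ONNX Runtime report which execution provider ran each node.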

Urgency: Urgent
Platform: Linux
OS Version: Ubuntu 22.04
ONNX Runtime Installation: Released Package
ONNX Runtime Version or Commit ID: 1.18.1
ONNX Runtime API: Python
Architecture: X64
Execution Provider: CUDA
Execution Provider Library Version: CUDA 12 / CUDA 11.8
Model File: No response
Is this a quantized model? Yes

github-actions bot added the ep:CUDA and quantization labels on Jul 24, 2024
sophies927 added the performance label on Jul 25, 2024
duanshengliu (Contributor, author) commented:

@snnn @skottmckay @tianleiwu Could you take a look at this issue?

github-actions bot commented on Aug 26, 2024:

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

github-actions bot added the stale label on Aug 26, 2024