
Error in quantize vicuna-7b model from fp16 to int8 #20867

Open
JackWeiw opened this issue May 30, 2024 · 5 comments
Labels
ep:CUDA (issues related to the CUDA execution provider), quantization (issues related to quantization), stale (issues that have not been addressed in a while; categorized by a bot)

Comments

@JackWeiw

Describe the issue

Using shape_inference.quant_pre_process to preprocess the model results in an error, even with skip_optimization=True:
(screenshot: preprocessing error)

After that, quantize_dynamic successfully quantizes the model to int8, but loading the quantized model back fails:
(screenshot: load error)

To reproduce

(screenshots: reproduction code and resulting errors)

Urgency

Urgent, a paper delivery deadline is approaching!

Platform

Linux

OS Version

ubuntu22.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.17

ONNX Runtime API

Python

Architecture

X64

Execution Provider

CUDA

Execution Provider Library Version

CUDA11.8

@github-actions github-actions bot added the ep:CUDA issues related to the CUDA execution provider label May 30, 2024
@xadupre
Member

xadupre commented Jun 3, 2024

Did you try to see if it works with onnxruntime==1.18?

@JackWeiw
Author

JackWeiw commented Jun 4, 2024

> Did you try to see if it works with onnxruntime==1.18?

I switched to onnxruntime==1.18, and it still returns the same error when I try to pre-process:
(screenshot: preprocessing error)
If I simply use quantize_dynamic, it works fine, but check_model then fails:
(screenshot: check_model error)
I used the default opset version (14) when exporting from PyTorch; my torch version is 2.3 with CUDA 11.8.
Do you have any insights?

@xadupre
Member

xadupre commented Jun 4, 2024

Are you using the latest onnx package?

@JackWeiw
Author

JackWeiw commented Jun 5, 2024

> Are you using the latest onnx package?

I have updated onnx to 1.16.1 and onnxruntime to 1.18.0, and quantization now succeeds:
(screenshot: successful quantization)
However, when I try to run the quantized model in onnxruntime, it reports an error:
(screenshot: runtime error)

@sophies927 sophies927 added the quantization issues related to quantization label Jun 6, 2024
Contributor

github-actions bot commented Jul 7, 2024

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

@github-actions github-actions bot added the stale issues that have not been addressed in a while; categorized by a bot label Jul 7, 2024

3 participants