[Model] support bitsandbytes quantization with minicpm3 model #10682
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these:
Force-pushed from f5eee26 to 7d73b1b
Can you link to an example HF repo that uses this quantization?
Signed-off-by: Ubuntu <zixuanzhang@bytedance.com>
Force-pushed from 7d73b1b to fab405b
Oh, I forgot that you can perform in-flight quantization with bitsandbytes. Never mind.
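The comment above refers to vLLM's in-flight bitsandbytes quantization: rather than requiring a pre-quantized checkpoint on the Hub, vLLM can quantize the original weights at load time. A minimal sketch of what this PR enables for MiniCPM3, assuming the `openbmb/MiniCPM3-4B` checkpoint and the vLLM flags documented for bitsandbytes support (exact arguments may differ across vLLM versions, and running it requires a CUDA GPU):

```python
# Hypothetical usage sketch: load an unquantized MiniCPM3 checkpoint and
# quantize it in-flight with bitsandbytes at load time.
from vllm import LLM, SamplingParams

llm = LLM(
    model="openbmb/MiniCPM3-4B",    # original (unquantized) HF checkpoint
    quantization="bitsandbytes",    # quantize weights on the fly while loading
    load_format="bitsandbytes",     # load format used with bitsandbytes quantization
    trust_remote_code=True,         # MiniCPM3 ships custom modeling code
)

outputs = llm.generate(
    ["Hello, my name is"],
    SamplingParams(max_tokens=32),
)
print(outputs[0].outputs[0].text)
```

This is the scenario the reviewer's question about an example HF repo was probing: with in-flight quantization, no bitsandbytes-specific repo is needed.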
Thanks for your contribution. I can generate reasonable results locally (TP=1/2).
Hello @jeejeelee, would you please merge the code? Thank you!
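The reviewer's check at tensor parallel sizes 1 and 2 could be reproduced roughly as follows. This is a hedged sketch, not the reviewer's actual command; flag names follow vLLM's CLI conventions and a multi-GPU machine is assumed:

```shell
# Hypothetical reproduction of the local verification: serve MiniCPM3 with
# in-flight bitsandbytes quantization, once per tensor-parallel size.
for tp in 1 2; do
    vllm serve openbmb/MiniCPM3-4B \
        --quantization bitsandbytes \
        --load-format bitsandbytes \
        --trust-remote-code \
        --tensor-parallel-size "$tp"
done
```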
…roject#10682) Signed-off-by: Ubuntu <zixuanzhang@bytedance.com> Signed-off-by: Andrew Feldman <afeldman@neuralmagic.com>
No description provided.