Closed
Labels: feature request (New feature or request)
Description
🚀 The feature, motivation and pitch
Description:
Qwen2.5 (32B) is a state-of-the-art model, and it is especially interesting in 4-bit precision (bitsandbytes).
- I tried integrating it, but the model did not work as expected: the output is just "!!!!!".
- I created a Colab notebook showing that Qwen2.5 works in the transformers library but fails in vLLM after my modification.
In the notebook I show how the model works with Hugging Face transformers and how, after adding bitsandbytes support in vLLM, the output becomes gibberish; a minimal sketch of the working transformers path is shown below.
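For context, this is roughly the working Hugging Face path from the notebook. It is only a sketch: the model id, dtype, and generation settings are illustrative assumptions, not copied from the Colab.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Qwen/Qwen2.5-32B-Instruct"  # illustrative checkpoint

# Standard 4-bit NF4 load via bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

inputs = tokenizer("The future of AI is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # coherent text, not "!!!!"

This is the reference behavior the vLLM integration should match.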
I tried adding the following lines under the Qwen2ForCausalLM class:
bitsandbytes_stacked_params_mapping = {
# shard_name, weight_name, index
"q_proj": ("qkv_proj", 0),
"k_proj": ("qkv_proj", 1),
"v_proj": ("qkv_proj", 2),
"gate_proj": ("gate_up_proj", 0),
"up_proj": ("gate_up_proj", 1),
}
- There is a similar, recently merged PR that adds bitsandbytes support to Gemma2.
Bad output example:
Prompt: 'The future of AI is', Generated text: '!!!!!!!!!!!!!!!!'
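For reproducibility, this is roughly how the quantized model is loaded and queried through vLLM to get that output. It is a sketch under the assumption that the quantization and load_format options are set to "bitsandbytes", as with other bnb-enabled models; the model id is illustrative.

from vllm import LLM, SamplingParams

# Assumed flags: both set to "bitsandbytes", mirroring other
# bnb-supported models in vLLM.
llm = LLM(
    model="Qwen/Qwen2.5-32B-Instruct",  # illustrative checkpoint
    quantization="bitsandbytes",
    load_format="bitsandbytes",
)

sampling_params = SamplingParams(temperature=0.0, max_tokens=32)
outputs = llm.generate(["The future of AI is"], sampling_params)
for output in outputs:
    # With only the stacked-params mapping added to Qwen2ForCausalLM,
    # this currently prints the "!!!!" gibberish shown above.
    print(f"Prompt: {output.prompt!r}, Generated text: {output.outputs[0].text!r}")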
Alternatives
No response
Additional context
No response
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.