[Feature]: Qwen2.5 bitsandbytes support #8941

@hanan9m

Description

🚀 The feature, motivation and pitch

Qwen2.5 (32B) is a state-of-the-art model, and it is especially attractive to run in 4-bit precision with bitsandbytes.

  • I tried integrating it, but the model did not work as expected: the output is just "!!!!!".
  • I created a Colab notebook showing that Qwen2.5 works in the transformers library but fails in vLLM after my modification.
    The notebook demonstrates the model working through Hugging Face and shows that, after adding bitsandbytes support to vLLM, the output is gibberish (a sketch of the working Hugging Face path follows this list).
    I tried adding these lines under the Qwen2ForCausalLM class:
    bitsandbytes_stacked_params_mapping = {
        # shard_name, weight_name, index
        "q_proj": ("qkv_proj", 0),
        "k_proj": ("qkv_proj", 1),
        "v_proj": ("qkv_proj", 2),
        "gate_proj": ("gate_up_proj", 0),
        "up_proj": ("gate_up_proj", 1),
    }
  • A similar PR that adds bitsandbytes support to Gemma2 was just merged.
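
(For context on the mapping above: vLLM fuses q_proj/k_proj/v_proj into a single qkv_proj parameter and gate_proj/up_proj into gate_up_proj, so the mapping tells the bitsandbytes loader which slice of each fused parameter a Hugging Face checkpoint shard fills.)

For reference, the working Hugging Face path from the notebook looks roughly like the sketch below. This is a minimal reconstruction, not the exact Colab code; the model id Qwen/Qwen2.5-32B-Instruct and the 4-bit settings are assumptions.

    # Sketch: load Qwen2.5 in 4-bit with transformers + bitsandbytes.
    # Model id and quantization settings are assumptions, not the exact Colab code.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "Qwen/Qwen2.5-32B-Instruct"
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=bnb_config,
        device_map="auto",
    )

    # The same prompt that yields "!!!!!" in vLLM generates coherent text here.
    inputs = tokenizer("The future of AI is", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(out[0], skip_special_tokens=True))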

Bad output example:

Prompt: 'The future of AI is', Generated text: '!!!!!!!!!!!!!!!!'
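
A minimal repro of the failing path, assuming vLLM's documented in-flight bitsandbytes loading (the quantization and load_format flags may differ across vLLM versions):

    # Sketch: reproduce the "!!!!!" output in vLLM after adding the mapping.
    # Flags follow vLLM's bitsandbytes docs; exact values are assumptions.
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="Qwen/Qwen2.5-32B-Instruct",
        quantization="bitsandbytes",
        load_format="bitsandbytes",
    )
    params = SamplingParams(temperature=0.0, max_tokens=16)
    for out in llm.generate(["The future of AI is"], params):
        print(f"Prompt: {out.prompt!r}, Generated text: {out.outputs[0].text!r}")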

Alternatives

No response

Additional context

No response
