Qwen/Qwen3-VL-30B-A3B-Instruct GPTQ #902

@JartX

Description

Hi @wenhuach21! Could you tell me how to quantize the Qwen/Qwen3-VL-30B-A3B-Instruct model with GPTQ? After following the steps at:
https://huggingface.co/Intel/Qwen3-VL-235B-A22B-Instruct-int4-AutoRound
I get the following error:

https://pastebin.com/wEXV63WT

and as a result the model can't be loaded.

Thank you very much :)

My fix was changing the export format from auto_round to auto_gptq, many thanks!
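For anyone hitting the same issue: assuming the quantization followed the AutoRound recipe from the linked model card, the fix amounts to selecting auto_gptq as the export format. A minimal sketch of such an invocation is below; the multimodal entry point (`auto-round-mllm`) and the exact flag values are assumptions on my part, not the precise command from this thread, so check the AutoRound documentation for your installed version.

```shell
# Hypothetical sketch of an AutoRound run that exports in GPTQ format.
# Flags (--bits, --group_size, --output_dir) are illustrative assumptions;
# the key change is --format auto_gptq instead of auto_round.
auto-round-mllm \
  --model Qwen/Qwen3-VL-30B-A3B-Instruct \
  --bits 4 \
  --group_size 128 \
  --format auto_gptq \
  --output_dir ./Qwen3-VL-30B-A3B-Instruct-gptq
```

The resulting checkpoint is saved in the GPTQ layout, which the target inference stack could then load where the auto_round-format export previously failed.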
