Hi @wenhuach21! Could you tell me how to quantize the Qwen/Qwen3-VL-30B-A3B-Instruct model with GPTQ? I followed the steps at:
https://huggingface.co/Intel/Qwen3-VL-235B-A22B-Instruct-int4-AutoRound
but I get the following error:
https://pastebin.com/wEXV63WT
and as a result the quantized model can't be loaded.
Thank you very much :)
The only change I made to those steps was switching the export format from auto_round to auto_gptq. Many thanks!