Is your feature request related to a problem? Please describe.
I would like to know whether the current version of llama-cpp-python supports running the GGUF-format Qwen2-VL-2B-Instruct model.
Describe the solution you'd like
Support for loading and running the GGUF-format Qwen2-VL-2B-Instruct model (including its vision component) through llama-cpp-python.
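A minimal sketch of the intended usage, for context. The file paths are hypothetical placeholders, and `Llava15ChatHandler` is used as a stand-in multimodal handler; whether a Qwen2-VL-compatible handler/projector path exists is exactly what this issue asks.

```python
def load_vision_model(model_path: str, mmproj_path: str):
    """Load a GGUF vision model via llama-cpp-python (pip install llama-cpp-python).

    NOTE: Llava15ChatHandler is a stand-in; Qwen2-VL may need its own
    handler/projector support, which is the subject of this request.
    """
    from llama_cpp import Llama
    from llama_cpp.llama_chat_format import Llava15ChatHandler

    handler = Llava15ChatHandler(clip_model_path=mmproj_path)
    return Llama(model_path=model_path, chat_handler=handler, n_ctx=4096)


def build_messages(image_url: str, question: str) -> list:
    """OpenAI-style multimodal payload accepted by create_chat_completion()."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": question},
            ],
        }
    ]


# Intended usage (paths are hypothetical placeholders):
# llm = load_vision_model("Qwen2-VL-2B-Instruct-Q4_K_M.gguf", "mmproj.gguf")
# out = llm.create_chat_completion(
#     messages=build_messages("https://example.com/cat.jpg", "Describe the image.")
# )
# print(out["choices"][0]["message"]["content"])
```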
Describe alternatives you've considered
Same as above: GGUF-format support for Qwen2-VL-2B-Instruct in llama-cpp-python.