
Does llama-cpp-python support the GGUF-format model of Qwen2-VL-2B-Instruct? #1895

Open
@helloHKTK

Description


Is your feature request related to a problem? Please describe.
I would like to know whether the current version of llama-cpp-python supports the GGUF-format model of Qwen2-VL-2B-Instruct.

Describe the solution you'd like
Support for loading and running the GGUF-format model of Qwen2-VL-2B-Instruct.
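Since the answer may depend on the installed llama-cpp-python version, one way to check locally is to probe for a matching multimodal chat handler. This is a minimal sketch, assuming a Qwen2-VL handler would be exposed in `llama_cpp.llama_chat_format` alongside the existing vision handlers (e.g. `Llava15ChatHandler`); the Qwen handler class names below are guesses, not confirmed API:

```python
def probe_qwen_vl_support():
    """Return True/False if llama-cpp-python exposes a Qwen-VL chat
    handler, or None if llama-cpp-python is not installed at all."""
    try:
        from llama_cpp import llama_chat_format
    except ImportError:
        return None  # llama-cpp-python not installed in this environment
    # Hypothetical class names, modeled on existing handlers such as
    # Llava15ChatHandler; adjust if your version names them differently.
    candidate_names = ("Qwen25VLChatHandler", "Qwen2VLChatHandler")
    return any(hasattr(llama_chat_format, name) for name in candidate_names)

print(probe_qwen_vl_support())
```

If the probe returns False or None, loading the text weights alone may still work via `Llama(model_path=...)`, but the vision side of the model would not be usable.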

Describe alternatives you've considered
None; the request is simply support for the GGUF-format model of Qwen2-VL-2B-Instruct.

