
Could we request support for a smallish (~4-5B param) modern vision LLM? LLaVA-1.6 or nanoLLaVA? #988

Open
@kinchahoy

Description

🚀 The feature, motivation and pitch

Good native PyTorch support for LLM inference is key to PyTorch's continued success. Vision LLMs tend to have uneven support in mainstream inference engines like llama.cpp because the image side (CLIP/SigLIP, etc.) has to be reimplemented. PyTorch could natively support performant, quantized vision LLMs on ARM devices, which would make a big difference in usability.
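For context on what supporting such a model would entail: the LLaVA family is mostly lightweight plumbing around components PyTorch already runs well. Below is a rough, hypothetical sketch of that wiring (vision tower, MLP projector, decoder LLM); the class, argument names, and dimensions are illustrative only, not an existing torchchat or PyTorch API.

```python
import torch
import torch.nn as nn

class LlavaStyleVisionLM(nn.Module):
    """Hypothetical sketch of LLaVA-style wiring: patch features from a
    pretrained vision tower are projected into the LLM embedding space and
    prepended to the text embeddings before the decoder runs."""

    def __init__(self, vision_tower: nn.Module, language_model: nn.Module,
                 vision_dim: int, llm_dim: int):
        super().__init__()
        self.vision_tower = vision_tower      # e.g. a CLIP or SigLIP encoder
        # LLaVA-1.5/1.6 use a small two-layer MLP as the vision-to-LLM projector
        self.projector = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )
        self.language_model = language_model  # any decoder-only LLM

    def forward(self, pixel_values: torch.Tensor, text_embeds: torch.Tensor):
        # [batch, num_patches, vision_dim] patch features from the vision tower
        image_feats = self.vision_tower(pixel_values)
        # project into the LLM embedding space: [batch, num_patches, llm_dim]
        image_embeds = self.projector(image_feats)
        # splice image tokens in front of the text tokens and run the decoder;
        # the exact call depends on how the wrapped LLM accepts embeddings
        return self.language_model(torch.cat([image_embeds, text_embeds], dim=1))
```

The point of the sketch is that only the projector is new; the vision tower and decoder are standard modules, so quantization and ARM-friendly kernels applied to the LLM should carry over largely unchanged.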

Alternatives

No response

Additional context

No response

RFC (Optional)

No response
