
Add support for Llava-Next (v1.6) #43

Merged
merged 2 commits into main on Jun 22, 2024

Conversation

@Blaizzy (Owner) commented Jun 22, 2024

Changes:

  • Add Llava-Next

Closes #42

@Blaizzy marked this pull request as ready for review on June 22, 2024 at 00:19
@Blaizzy merged commit ec78bd3 into main on Jun 22, 2024
1 check passed
@jrp2014 commented Jun 28, 2024

Is there a README / example usage script for this one, please?

@Blaizzy (Owner, Author) commented Jun 28, 2024

It works just like llava-1.5:

from mlx_vlm import load, generate

# Load the 4-bit LLaVA 1.6 (LLaVA-NeXT) model and its processor.
model_path = "mlx-community/llava-1.6-mistral-7b-4bit"
model, processor = load(model_path)

# Build the chat prompt; the <image> token marks where the image features are inserted.
prompt = processor.tokenizer.apply_chat_template(
    [{"role": "user", "content": "<image>\nWhat are these?"}],
    tokenize=False,
    add_generation_prompt=True,
)

# Generate a response for the given image URL.
output = generate(
    model,
    processor,
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    prompt,
    verbose=False,
)
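A minimal follow-up to the snippet above (not from the original comment): the call returns the generated text as a string, so printing it shows the model's answer. The local file path below is a placeholder, on the assumption that the image loader accepts local paths as well as URLs.

# Continuation of the snippet above; "path/to/your_image.jpg" is a placeholder
# (assumption: local image files are handled the same way as URLs).
output = generate(
    model,
    processor,
    "path/to/your_image.jpg",
    prompt,
    verbose=False,
)
print(output)  # the generated answer as a string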
