
Transformer model openbmb/MiniCPM-Llama3-V-2_5 not supported #910

Open
AmazingTurtle opened this issue Jun 19, 2024 · 2 comments

Comments


AmazingTurtle commented Jun 19, 2024

The bug
Loading and prompting the transformer model openbmb/MiniCPM-Llama3-V-2_5 does not work.
The model appears to load, but according to nvtop nothing is allocated on my GPU, and no error is thrown. Prompting the LLM returns immediately, without a response and without an error.
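
For reference, a quick sanity check (a sketch using plain transformers, outside of guidance) to see whether the weights ever reach the GPU:

# Sanity check, independent of guidance: load the model with plain
# transformers and confirm the weights actually land on the GPU.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    'openbmb/MiniCPM-Llama3-V-2_5',
    trust_remote_code=True,
    torch_dtype=torch.float16,
).to('cuda')

# If loading worked, the parameters report a CUDA device and the
# allocated memory is well above zero.
print(next(model.parameters()).device)                      # expected: cuda:0
print(f"{torch.cuda.memory_allocated() / 1e9:.1f} GB allocated")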

To Reproduce

from guidance import models
lm = models.Transformers('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True)

print(lm + "Hello?")

[screenshot of the session output attached in the original issue]

Worth mentioning: openbmb provides a test script for transformers that does work:

# test.py
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True, torch_dtype=torch.float16)
model = model.to(device='cuda')

tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True)
model.eval()

image = Image.open('xx.jpg').convert('RGB')
question = 'What is in the image?'
msgs = [{'role': 'user', 'content': question}]

res = model.chat(
    image=image,
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True, # if sampling=False, beam_search will be used by default
    temperature=0.7,
    # system_prompt='' # pass system_prompt if needed
)
print(res)
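
(Note that this working path goes through the model's custom chat() helper rather than the standard forward()/generate() call, which may be where the incompatibility lies.)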
@riedgar-ms (Collaborator) commented:

@nking-1 , have you come across this in your forays into multimodal models?

@hudson-ai (Collaborator) commented:

I actually do get an error on my machine during a forward pass:
TypeError: MiniCPMV.forward() missing 1 required positional argument: 'data' (I can include the full traceback if helpful.)

It seems that this model departs from the standard huggingface model-call API that we're using (likely because of multimodality).
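
To illustrate the mismatch, here is a minimal, hypothetical sketch (MiniCPMVLike is a stand-in, not the real remote code; the signature is paraphrased from the traceback). guidance issues the standard keyword-argument call, while this model's forward() expects a positional data dict:

# Hypothetical stand-in for the model's remote code (not the real class).
class MiniCPMVLike:
    def forward(self, data, **kwargs):
        # the real model unpacks data["input_ids"], data["pixel_values"], ...
        return data

model = MiniCPMVLike()

# The model's own chat() path builds the `data` dict itself:
model.forward(data={"input_ids": [1, 2, 3]})  # works

# A standard transformers-style call fails:
try:
    model.forward(input_ids=[1, 2, 3])
except TypeError as e:
    print(e)  # forward() missing 1 required positional argument: 'data'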
