[Bug]: Pixtral + guided_json fails with Internal Server Error #8429
Comments
Can you show the server-side stack trace?
I did. As I mentioned, the server log shows literally nothing except that "Internal Server Error" line; there is no stack trace or anything else.
To better debug the issue, can you use guided decoding in offline inference via
I have the same issue, but in the environment I have access to, the following just hangs :(

```python
from vllm import LLM

llm = LLM(
    model="/root/models/mistralai/Pixtral-12B-2409",
    tokenizer_mode="mistral",
    served_model_name="mistralai/Pixtral-12B-2409",
    max_model_len=5 * 4096,
    guided_decoding_backend="outlines",
    limit_mm_per_prompt={"image": 5},
    tensor_parallel_size=4,
)
```
I don't think guided decoding with outlines officially supports the Mistral tokenizer (we still need to double-check this), and I don't think it's really vLLM's responsibility to make them work with each other if they don't. However, if they are indeed incompatible, then we should disable guided decoding when the Mistral tokenizer is present. Perhaps @patrickvonplaten you might have some thoughts on this?
For now, can we raise a NotImplementedError, with an error message that asks for a contribution if people are interested in this feature?
Yeah, I think that's a good idea and something rather straightforward to do!
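The guard discussed above could look roughly like this. This is a minimal sketch, not vLLM's actual code; the helper name, its signature, and where it would be called from are all assumptions:

```python
from typing import Optional


def validate_guided_decoding(
    tokenizer_mode: str,
    guided_decoding_backend: Optional[str],
) -> None:
    """Reject guided decoding when the Mistral tokenizer is in use.

    Hypothetical helper illustrating the NotImplementedError suggested
    in this thread; it would run during engine-argument validation.
    """
    if guided_decoding_backend is not None and tokenizer_mode == "mistral":
        raise NotImplementedError(
            "Guided decoding is not currently supported with the Mistral "
            "tokenizer. Contributions to add support are welcome."
        )
```

With a check like this, the hang reported above would instead fail fast at startup with an actionable message.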
The latest code of
@stikkireddy, your code should run now if you switch the
Your current environment
Everything works when sending an image query. But as soon as I try any simple guided_json or guided_choice request, it always fails.
gives:
and vllm shows:
Model Input Dumps
No response
🐛 Describe the bug
See above.
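For context, a guided_json request of the kind described above is an ordinary chat completion sent to vLLM's OpenAI-compatible server with an extra `guided_json` field carrying a JSON Schema. The model path matches the snippet earlier in the thread; the schema and prompt below are illustrative assumptions:

```python
import json

# Illustrative JSON Schema for the structured output we want back.
schema = {
    "type": "object",
    "properties": {"animal": {"type": "string"}},
    "required": ["animal"],
}

# Request body for POST /v1/chat/completions. The "guided_json" key is
# the vLLM-specific extra field that enables guided decoding; with the
# OpenAI Python client it would be passed via extra_body.
payload = {
    "model": "mistralai/Pixtral-12B-2409",
    "messages": [
        {"role": "user", "content": "What animal is in the image?"},
    ],
    "guided_json": schema,
}

body = json.dumps(payload)
```

It is this kind of request that returns the bare "Internal Server Error" reported here, while the same request without `guided_json` succeeds.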