[Bug]: MistralTokenizer object has no attribute 'get_vocab' #8358
Comments
Can you check whether #8364 can solve this issue? If the issue still persists, @patrickvonplaten is the mistral tokenizer intended to work with guided decoding?
At the moment we do not have support for guided decoding. I can take a look though to see if it's easy to implement. I think it should be simple to add a
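For illustration, a minimal sketch of the kind of shim a fix could add (hypothetical names; vLLM's actual `MistralTokenizer` wraps `mistral_common`, so this is not the real implementation):

```python
# Hypothetical sketch: expose a HuggingFace-style get_vocab() on a wrapper
# around a tokenizer that only provides an id->token list. The class and
# attribute names here are illustrative, not vLLM's actual code.
class MistralTokenizerShim:
    def __init__(self, id_to_token):
        # id_to_token: list where index i is the token string for id i
        self._id_to_token = list(id_to_token)

    def get_vocab(self):
        # Guided-decoding backends expect a token -> id mapping.
        return {tok: i for i, tok in enumerate(self._id_to_token)}


# Usage with a toy vocabulary
shim = MistralTokenizerShim(["<s>", "</s>", "hello", "world"])
vocab = shim.get_vocab()
print(vocab["hello"])  # → 2
```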
With the fix, I now get this error. I was forced to run in debugging because the API returns a 500 error without extra information.
Also linking this issue here: #8429 (comment)

Closing as completed by #9188
Your current environment
The output of `python collect_env.py`
🐛 Describe the bug
I try to use `guided_json` or `response_format` in a request via the vLLM server with `Mistral-Nemo-Instruct-2407` and `--tokenizer-mode mistral`, but get an `AttributeError`: `AttributeError: 'MistralTokenizer' object has no attribute 'get_vocab'`
Launch the server:
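The exact launch command was lost in extraction; a hypothetical reconstruction (the model path and port are assumptions, only `--tokenizer-mode mistral` comes from the report):

```shell
# Hypothetical reconstruction of the launch command; the model path and
# port are assumptions, only --tokenizer-mode mistral is from the report.
vllm serve mistralai/Mistral-Nemo-Instruct-2407 \
    --tokenizer-mode mistral \
    --port 8000
```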
Request in Python with `response_format`:

Same with `guided_json`:

Get the same `AttributeError`:
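The original request snippets were also lost in extraction; a sketch of what the two request bodies look like (the base URL, model name, and JSON schema are assumptions), both of which would be POSTed to the server's `/v1/chat/completions` endpoint:

```python
# Hypothetical sketch of the two failing requests; the original snippets
# were lost, so the base URL, model name, and schema here are assumptions.
base_url = "http://localhost:8000/v1"  # assumed local vLLM server
model = "mistralai/Mistral-Nemo-Instruct-2407"

# Request body using the OpenAI-compatible response_format field.
request_json_object = {
    "model": model,
    "messages": [{"role": "user", "content": "Return a JSON object with a 'city' key."}],
    "response_format": {"type": "json_object"},
}

# Same request using vLLM's guided_json extra parameter instead.
city_schema = {
    "type": "object",
    "properties": {"city": {"type": "string"}},
    "required": ["city"],
}
request_guided = {
    "model": model,
    "messages": [{"role": "user", "content": "Name a city."}],
    "guided_json": city_schema,
}

# Either body would be POSTed to base_url + "/chat/completions"; with
# --tokenizer-mode mistral, both triggered the AttributeError above.
```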
Update: with the argument `guided-decoding-backend = lm-format-enforcer`, I get a `TypeError`.

Before submitting a new issue...