[Bug]: vLLM overrides transformers' AutoConfig for mllama #9076
Comments
We override it only for changing …
@heheda12345 But it does change the behavior of AutoConfig whenever vLLM is imported alongside transformers (a sketch below illustrates this).
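A minimal sketch of the reported behavior, assuming vllm 0.6.2 is installed and a transformers version with Mllama support; the model id below is a placeholder and not taken from this issue:

```python
# Sketch of the reported behavior: importing vllm re-registers the "mllama"
# config class with transformers' AutoConfig. Assumes vllm 0.6.2 and a
# transformers release with Mllama support; the model id is a placeholder.
from transformers import AutoConfig

MODEL_ID = "meta-llama/Llama-3.2-11B-Vision"  # placeholder mllama checkpoint

print(type(AutoConfig.from_pretrained(MODEL_ID)))  # transformers' MllamaConfig

import vllm  # noqa: F401  # importing vllm triggers its AutoConfig registration

print(type(AutoConfig.from_pretrained(MODEL_ID)))  # per this report, now vllm's MllamaConfig
```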
We have a temporary workaround of explicitly using transformers.MllamaForConditionalGeneration to load the mllama model (a sketch follows below), but we believe a fix on the vLLM side would benefit other vLLM users who aren't aware of this override.
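A sketch of that workaround, using the same placeholder model id as above:

```python
# Workaround sketch: load the model class directly from transformers instead
# of going through the Auto* classes, so vllm's registration of "mllama"
# has no effect on which config class gets used. Model id is a placeholder.
from transformers import AutoProcessor, MllamaForConditionalGeneration

MODEL_ID = "meta-llama/Llama-3.2-11B-Vision"  # placeholder mllama checkpoint

model = MllamaForConditionalGeneration.from_pretrained(MODEL_ID)
processor = AutoProcessor.from_pretrained(MODEL_ID)
```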
Your current environment
vllm 0.6.2
Model Input Dumps
No response
🐛 Describe the bug
This line overrides transformers' AutoConfig for mllama and should be removed:
vllm/vllm/transformers_utils/config.py, line 41 at commit e5dc713
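For context, a paraphrased sketch of what that line appears to do, not a verbatim copy of config.py at e5dc713; the import path of vLLM's own MllamaConfig variant is an assumption here. The effect is that vLLM registers its own config class under the "mllama" model type at import time, replacing the entry transformers itself provides.

```python
# Paraphrased sketch, not the actual vllm source. The import path of vllm's
# MllamaConfig variant is an assumption for illustration.
from transformers import AutoConfig

from vllm.transformers_utils.configs import MllamaConfig  # vllm's variant (assumed path)

# Registering under the existing "mllama" model type with exist_ok=True
# replaces the config class transformers would otherwise resolve for it.
AutoConfig.register("mllama", MllamaConfig, exist_ok=True)
```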