Your current environment
BadRequestError: Error code: 400 - {'object': 'error', 'message': 'As of transformers v4.44, default chat template is no longer allowed, so you must provide a chat template if the tokenizer does not define one.', 'type': 'BadRequestError', 'param': None, 'code': 400}
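For reference, the 400 comes from the `/v1/chat/completions` endpoint. A request like the sketch below reproduces it (host/port taken from the serve command in the next section; the model name is assumed to default to the `--model` path since no `--served-model-name` is set, and the request is assumed to be made from the serving host):

```bash
# Hypothetical reproduction against the server started below.
curl http://localhost:10868/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "/ai/qwen1.5-1.8b.gguf",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
# Expected response while the GGUF-converted tokenizer carries no chat template:
# {"object": "error", "message": "As of transformers v4.44, default chat template is no longer allowed, ...", "code": 400}
```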
How would you like to use vllm
CUDA_VISIBLE_DEVICES=1 vllm serve /ai/qwen1.5-1.8b.gguf --host 0.0.0.0 --port 10868 --max-model-len 4096 --trust-remote-code --tensor-parallel-size 1 --dtype=half --quantization gguf --load-format gguf
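Since the tokenizer converted from the GGUF file does not define a chat template, one likely fix is to supply a template explicitly through `vllm serve`'s `--chat-template` option. A minimal sketch, assuming the GGUF was exported from a Qwen1.5 *chat* checkpoint (ChatML format); the template file path and its contents are assumptions, not taken from this issue:

```bash
# Write a minimal ChatML-style template (assumption: the model expects
# <|im_start|>/<|im_end|> markers, as Qwen1.5 chat models do).
cat > /ai/qwen1.5-chat-template.jinja <<'EOF'
{% for message in messages %}{{ '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>\n' }}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}
EOF

# Same serve command as above, plus --chat-template pointing at the file.
CUDA_VISIBLE_DEVICES=1 vllm serve /ai/qwen1.5-1.8b.gguf \
  --host 0.0.0.0 --port 10868 --max-model-len 4096 --trust-remote-code \
  --tensor-parallel-size 1 --dtype=half --quantization gguf --load-format gguf \
  --chat-template /ai/qwen1.5-chat-template.jinja
```

Alternatively, pointing `--tokenizer` at the original Hugging Face repository (e.g. Qwen/Qwen1.5-1.8B-Chat, assuming that is the source of this GGUF) should let the server pick up that tokenizer's built-in chat template. Both `--chat-template` and `--tokenizer` are standard `vllm serve` options; the template text above is only an assumption about this particular model.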
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.