1 parent 2e090bd commit dff6800
examples/offline_inference/basic/README.md
@@ -70,7 +70,7 @@ Try one yourself by passing one of the following models to the `--model` argumen
 
 vLLM supports models that are quantized using GGUF.
 
-Try one yourself by downloading a GUFF quantised model and using the following arguments:
+Try one yourself by downloading a quantized GGUF model and using the following arguments:
 
 ```python
 from huggingface_hub import hf_hub_download
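The README snippet the diff touches is truncated here after the `hf_hub_download` import. For context, a minimal sketch of the workflow that line introduces: download a GGUF-quantized checkpoint from the Hugging Face Hub, then point vLLM's `LLM` class at the local file. The repo ID, filename, prompt, and sampling settings below are illustrative assumptions, not taken from the commit.

```python
# Sketch (assumed details, not from the commit): run offline inference on a
# GGUF-quantized model with vLLM. Requires `vllm` and `huggingface_hub`.

def main():
    # Heavy imports kept inside main() so the module can be imported cheaply.
    from huggingface_hub import hf_hub_download
    from vllm import LLM, SamplingParams

    # Hypothetical GGUF repo/file; substitute any GGUF-quantized checkpoint.
    model_path = hf_hub_download(
        repo_id="TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF",
        filename="tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf",
    )

    # vLLM accepts a local .gguf path as the model argument.
    llm = LLM(model=model_path)
    outputs = llm.generate(
        ["Hello, my name is"],
        SamplingParams(temperature=0.8, max_tokens=32),
    )
    print(outputs[0].outputs[0].text)

if __name__ == "__main__":
    main()
```

Downloading once via `hf_hub_download` and passing the cached local path avoids re-fetching the multi-gigabyte GGUF file on every run.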