`site-src/guides/index.md` (10 additions, 10 deletions)
@@ -4,7 +4,7 @@
This project is still in an alpha state and breaking changes may occur in the future.
-This quickstart guide is intended for engineers familiar with k8s and model servers (vLLM in this instance). The goal of this guide is to get an Inference Gateway up and running!
+This quickstart guide is intended for engineers familiar with k8s and model servers (vLLM in this instance). The goal of this guide is to get an Inference Gateway up and running!
## **Prerequisites**
@@ -35,7 +35,7 @@ This quickstart guide is intended for engineers familiar with k8s and model serv
For this setup, you will need 3 GPUs to run the sample model server. Adjust the number of replicas in `./config/manifests/vllm/gpu-deployment.yaml` as needed.
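As a sketch, the replica count can also be changed after deployment with `kubectl scale`; the Deployment name used below is an assumption, so check `gpu-deployment.yaml` for the real `metadata.name`.

```bash
# Desired number of replicas; one per available GPU in this sample.
REPLICAS=3

# "vllm-llama3-8b-instruct" is a hypothetical Deployment name; use the
# metadata.name from gpu-deployment.yaml instead.
command -v kubectl >/dev/null 2>&1 \
  && kubectl scale deployment vllm-llama3-8b-instruct --replicas="${REPLICAS}" \
  || echo "kubectl not available; edit the replicas field in gpu-deployment.yaml instead"
```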
Create a Hugging Face secret to download the model [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). Ensure that the token grants access to this model.
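That step can be sketched as a small Secret manifest; the secret name `hf-token` and the key `token` are assumptions here, so align them with the `secretKeyRef` in the sample deployment manifest.

```bash
# Provide your real token via the environment; this placeholder only lets
# the sketch render.
HF_TOKEN="${HF_TOKEN:-hf_replace_me}"

# The Secret name "hf-token" and key "token" are assumptions; match them to
# whatever the vLLM deployment manifest references.
cat > hf-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: hf-token
type: Opaque
stringData:
  token: ${HF_TOKEN}
EOF

command -v kubectl >/dev/null 2>&1 \
  && kubectl apply -f hf-secret.yaml \
  || echo "kubectl not available; apply hf-secret.yaml by hand"
```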
Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.
```bash
@@ -46,11 +46,11 @@ This quickstart guide is intended for engineers familiar with k8s and model serv
=== "CPU-Based Model Server"
This setup uses the official `vllm-cpu` image, which, according to the documentation, can run vLLM on x86 CPU platforms.
-For this setup, we use approximately 9.5GB of memory and 12 CPUs for each replica.
+For this setup, we use approximately 9.5GB of memory and 12 CPUs for each replica.
While it is possible to deploy the model server with fewer resources, this is not recommended. For example, in our tests, loading the model with 8GB of memory and 1 CPU was possible but took almost 3.5 minutes, and inference requests took an unreasonably long time. In general, there is a tradeoff between the memory and CPU we allocate to our pods and the performance: the more memory and CPU we allocate, the better performance we can get.
-After running multiple configurations of these values, we decided in this sample to use 9.5GB of memory and 12 CPUs for each replica, which gives reasonable response times. You can increase those numbers and potentially get even better response times. To modify the allocated resources, adjust the numbers in [cpu-deployment.yaml](https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml) as needed.
+After running multiple configurations of these values, we decided in this sample to use 9.5GB of memory and 12 CPUs for each replica, which gives reasonable response times. You can increase those numbers and potentially get even better response times. To modify the allocated resources, adjust the numbers in [cpu-deployment.yaml](https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml) as needed.
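As a sketch, those per-replica numbers can also be set without hand-editing the manifest by using `kubectl set resources`; the Deployment name below is an assumption, so confirm it against `cpu-deployment.yaml`.

```bash
# Hypothetical Deployment name; read the real one from cpu-deployment.yaml.
DEPLOYMENT=vllm-cpu

# 12 CPUs and 9.5GB of memory per replica, as discussed above.
command -v kubectl >/dev/null 2>&1 \
  && kubectl set resources "deployment/${DEPLOYMENT}" \
       --requests=cpu=12,memory=9.5Gi --limits=cpu=12,memory=9.5Gi \
  || echo "kubectl not available; edit cpu-deployment.yaml instead"
```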
Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.
@@ -104,7 +104,7 @@ This quickstart guide is intended for engineers familiar with k8s and model serv
=== "GKE"
-1. Enable the Gateway API and configure proxy-only subnets when necessary. See [Deploy Gateways](https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-gateways)
+1. Enable the Gateway API and configure proxy-only subnets when necessary. See [Deploy Gateways](https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-gateways)
for detailed instructions.
1. Deploy Gateway and HealthCheckPolicy resources
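A minimal sketch of what that step deploys is below; the resource names, the `gatewayClassName`, and the HealthCheckPolicy target are all assumptions, so prefer the manifests shipped with the project.

```bash
# Sketch only: names, class, and the HealthCheckPolicy target Service are
# assumptions, not the project's actual manifests.
cat > gke-gateway.yaml <<'EOF'
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: inference-gateway
spec:
  gatewayClassName: gke-l7-regional-external-managed
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: networking.gke.io/v1
kind: HealthCheckPolicy
metadata:
  name: health-check-policy
spec:
  default:
    config:
      type: HTTP
  targetRef:
    group: ""
    kind: Service
    name: vllm-service   # hypothetical backend Service name
EOF

command -v kubectl >/dev/null 2>&1 \
  && kubectl apply -f gke-gateway.yaml \
  || echo "kubectl not available; apply gke-gateway.yaml by hand"
```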
@@ -141,17 +141,17 @@ This quickstart guide is intended for engineers familiar with k8s and model serv
=== "Istio"
-Please note that this feature is currently in an experimental phase and is not intended for production use.
+Please note that this feature is currently in an experimental phase and is not intended for production use.
The implementation and user experience are subject to change as we continue to iterate on this project.
1. Requirements
- Gateway API [CRDs](https://gateway-api.sigs.k8s.io/guides/#installing-gateway-api) installed.
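Installing the Gateway API CRDs is typically a single `kubectl apply` of a released bundle; the version pinned below is an assumption, so check the Gateway API releases page for the current one.

```bash
# Assumed version pin; consult the gateway-api releases page before using.
GATEWAY_API_VERSION=v1.2.0

command -v kubectl >/dev/null 2>&1 \
  && kubectl apply -f "https://github.com/kubernetes-sigs/gateway-api/releases/download/${GATEWAY_API_VERSION}/standard-install.yaml" \
  || echo "kubectl not available; install the Gateway API CRDs by hand"
```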