Commit 7b69f70

PatrykWo and tthaddey authored
Cherrypick docker vllm: update readme (#1525) (#1538)

Cherry pick of the docker vllm: update readme from habana_main

Signed-off-by: Tomasz Thaddey <tthaddey@habana.ai>
Signed-off-by: Artur Fierka <artur.fierka@intel.com>
Co-authored-by: Tomasz Thaddey <76682475+tthaddey@users.noreply.github.com>

1 parent 79ef0d5 commit 7b69f70

File tree: 1 file changed

.cd/README.md (120 additions, 77 deletions)

# vLLM for Gaudi – Quick Start

This guide explains how to quickly run vLLM on Gaudi using a prebuilt Docker image and Docker Compose, with options for custom parameters and benchmarking.
It supports a wide range of validated models, including the LLaMa, Mistral, and Qwen families, with flexible configuration via environment variables or YAML files.

## Supported Models

*(The table of validated models is unchanged by this commit and is not shown in the diff.)*

## How to Use

### 1. Run the server using Docker Compose

The recommended and easiest way to start the vLLM server is with Docker Compose. At a minimum, set the following environment variables:

- `MODEL` - Select a model from the table above.
- `HF_TOKEN` - Your Hugging Face token (generate one at <https://huggingface.co>).
- `DOCKER_IMAGE` - The vLLM Docker image URL, from the Gaudi registry or a local repository.

**Example usage:**

```bash
cd vllm-fork/.cd/
MODEL="Qwen/Qwen2.5-14B-Instruct" \
HF_TOKEN="<your huggingface token>" \
DOCKER_IMAGE="<docker image url>" \
docker compose up
```
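
Once the server is up, you can send a quick test request from another terminal. This sketch is adapted from the earlier revision of this guide and assumes the server is listening on the default port 8000 on localhost:

```bash
# Query vLLM's OpenAI-compatible completions endpoint.
MODEL="Qwen/Qwen2.5-14B-Instruct"   # must match the model the server was started with
payload="{ \"model\": \"${MODEL}\", \"prompt\": \"What is DeepLearning?\", \"max_tokens\": 128, \"temperature\": 0 }"
curl -s --noproxy '*' http://localhost:8000/v1/completions \
  -H 'Content-Type: application/json' \
  -d "$payload"
```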

### 2. Running the Server with a Benchmark

To run the benchmark dedicated to a specific model with its default parameters, pass the `--profile benchmark` option to `docker compose up`:

```bash
cd vllm-fork/.cd/
MODEL="Qwen/Qwen2.5-14B-Instruct" \
HF_TOKEN="<your huggingface token>" \
DOCKER_IMAGE="<docker image url>" \
docker compose --profile benchmark up
```

This launches the vLLM server and runs the benchmark suite automatically.
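
To watch progress while the server and benchmark are running, you can follow the container logs. A minimal sketch, assuming you started the stack from the same `.cd/` directory:

```bash
# Stream logs from the services started by the compose file; press Ctrl+C to stop following.
docker compose --profile benchmark logs -f
```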

### 3. Run the server using Docker Compose with custom parameters

To override the default settings, you can provide additional parameters when starting the server. This is a more advanced approach:

- `PT_HPU_LAZY_MODE` - Enables lazy execution mode for the HPU (Habana Processing Unit), which may improve performance by batching operations.
- `VLLM_SKIP_WARMUP` - If enabled, skips the model warmup phase, which can reduce startup time but may affect initial performance.
- `MAX_MODEL_LEN` - Specifies the maximum sequence length the model can handle.
- `MAX_NUM_SEQS` - Sets the maximum number of sequences that can be processed simultaneously.
- `TENSOR_PARALLEL_SIZE` - Defines the number of parallel tensor partitions.
- `VLLM_EXPONENTIAL_BUCKETING` - Enables or disables the exponential bucketing warmup strategy.
- `VLLM_DECODE_BLOCK_BUCKET_STEP` - Sets the step size for allocating decode blocks during inference, affecting memory allocation granularity.
- `VLLM_DECODE_BS_BUCKET_STEP` - Determines the batch size step for decode operations, influencing how batches are grouped and processed.
- `VLLM_PROMPT_BS_BUCKET_STEP` - Sets the batch size step for prompt processing, impacting how prompt batches are handled.
- `VLLM_PROMPT_SEQ_BUCKET_STEP` - Controls the step size for prompt sequence allocation, affecting how sequences are bucketed for processing.

**Example usage:**

```bash
cd vllm-fork/.cd/
MODEL="Qwen/Qwen2.5-14B-Instruct" \
HF_TOKEN="<your huggingface token>" \
DOCKER_IMAGE="<docker image url>" \
TENSOR_PARALLEL_SIZE=1 \
MAX_MODEL_LEN=2048 \
docker compose up
```

### 4. Running the Server and Benchmark with Custom Parameters

You can customize benchmark parameters using:

- `INPUT_TOK` – Number of input tokens per prompt.
- `OUTPUT_TOK` – Number of output tokens to generate per prompt.
- `CON_REQ` – Number of concurrent requests to send during benchmarking.
- `NUM_PROMPTS` – Total number of prompts to use in the benchmark.

**Example usage:**

```bash
cd vllm-fork/.cd/
MODEL="Qwen/Qwen2.5-14B-Instruct" \
HF_TOKEN="<your huggingface token>" \
DOCKER_IMAGE="<docker image url>" \
INPUT_TOK=128 \
OUTPUT_TOK=128 \
CON_REQ=16 \
NUM_PROMPTS=64 \
docker compose --profile benchmark up
```

This will launch the vLLM server and run the benchmark suite using your specified parameters.

### 5. Running the Server and Benchmark, Both with Custom Parameters

You can launch the vLLM server and benchmark together, specifying any combination of optional parameters for both the server and the benchmark. Set the desired environment variables before running Docker Compose.

**Example usage:**

```bash
cd vllm-fork/.cd/
MODEL="Qwen/Qwen2.5-14B-Instruct" \
HF_TOKEN="<your huggingface token>" \
DOCKER_IMAGE="<docker image url>" \
TENSOR_PARALLEL_SIZE=1 \
MAX_MODEL_LEN=2048 \
INPUT_TOK=128 \
OUTPUT_TOK=128 \
CON_REQ=16 \
NUM_PROMPTS=64 \
docker compose --profile benchmark up
```

This command will start the vLLM server and run the benchmark suite using your specified custom parameters.
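
When you are finished, you can stop and remove the containers started in any of the scenarios above. A minimal example, assuming you are still in the `.cd/` directory:

```bash
# Stop and remove the services defined in the compose file, including the benchmark profile.
docker compose --profile benchmark down
```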

### 6. Running the Server and Benchmark Using Configuration Files

You can also configure the server and benchmark by specifying parameters in configuration files. To do this, set the following environment variables:

- `VLLM_SERVER_CONFIG_FILE` – Path to the server configuration file inside the Docker container.
- `VLLM_SERVER_CONFIG_NAME` – Name of the server configuration section.
- `VLLM_BENCHMARK_CONFIG_FILE` – Path to the benchmark configuration file inside the Docker container.
- `VLLM_BENCHMARK_CONFIG_NAME` – Name of the benchmark configuration section.

**Example:**

```bash
HF_TOKEN=<your huggingface token> \
VLLM_SERVER_CONFIG_FILE=server_configurations/server_text.yaml \
VLLM_SERVER_CONFIG_NAME=llama31_8b_instruct \
VLLM_BENCHMARK_CONFIG_FILE=benchmark_configurations/benchmark_text.yaml \
VLLM_BENCHMARK_CONFIG_NAME=llama31_8b_instruct \
docker compose --profile benchmark up
```

> [!NOTE]
> When using configuration files, you do not need to set the `MODEL` environment variable, as the model name is specified within the configuration file. However, you must still provide your `HF_TOKEN`.

### 7. Running the Server Directly with Docker

For full control, you can run the server using the `docker run` command. This approach allows you to specify any native Docker parameters as needed.

**Example:**

```bash
docker run -it --rm \
  -e MODEL=$MODEL \
  -e HF_TOKEN=$HF_TOKEN \
  -e http_proxy=$http_proxy \
  -e https_proxy=$https_proxy \
  -e no_proxy=$no_proxy \
  --cap-add=sys_nice \
  --ipc=host \
  --runtime=habana \
  -e HABANA_VISIBLE_DEVICES=all \
  -p 8000:8000 \
  --name vllm-server \
  <docker image name>
```

This method gives you full flexibility over Docker runtime options.
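
Two tips carried over from the earlier revision of this guide apply here: model files can be large, so for best performance keep the Hugging Face cache on an external disk and point `HF_HOME` at it, for example by adding `-e HF_HOME=/mnt/huggingface -v /mnt/huggingface:/mnt/huggingface` to the command above (the mount path is illustrative); and for quick development testing you can add `-e VLLM_SKIP_WARMUP=true` to skip the initial model warmup.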
