# Vector Inference: Easy inference on Slurm clusters
This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). **All scripts in this repository run natively on the Vector Institute cluster environment**. To adapt to other environments, update [`launch_server.sh`](vec-inf/launch_server.sh), [`vllm.slurm`](vec-inf/vllm.slurm), [`multinode_vllm.slurm`](vec-inf/multinode_vllm.slurm), and [`models.csv`](vec-inf/models/models.csv) accordingly.
## Installation
If you are using the Vector cluster environment and you don't need any customization to the inference server environment, run the following to install the package:
```bash
pip install vec-inf
```
Otherwise, we recommend using the provided [`Dockerfile`](Dockerfile) to set up your own environment with the package.
## Launch an inference server
We will use the Llama 3.1 model as an example. To launch an OpenAI-compatible inference server for Meta-Llama-3.1-8B-Instruct, run:
```bash
vec-inf launch Meta-Llama-3.1-8B-Instruct
```
You should see an output summarizing the launch configuration (job name, partition, requested resources, data type, and so on) along with the Slurm job ID of the submitted batch job.
The model will be launched using the [default parameters](vec-inf/models/models.csv); you can override these values by providing additional options. Use `--help` to see the full list.
If you'd like to see the Slurm logs, they are located in the `.vec-inf-logs` folder in your home directory. The log folder path can be modified by using the `--log-dir` option.
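For example, to write the Slurm logs somewhere other than the default location, you can pass `--log-dir` at launch time (the directory below is purely illustrative):

```bash
# Launch with a custom log directory (example path only)
vec-inf launch Meta-Llama-3.1-8B-Instruct --log-dir /scratch/$USER/vec-inf-logs
```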
You can check the inference server status by providing the Slurm job ID to the `status` command:
```bash
vec-inf status 13014393
```
The output reports the current state of the inference server, which will be one of the following:
* **PENDING**: Job submitted to Slurm, but not executed yet. Job pending reason will be shown.
* **LAUNCHING**: Job is running but the server is not ready yet.
* **READY**: Inference server running and ready to take requests.
* **FAILED**: Inference server in an unhealthy state. Job failed reason will be shown.
* **SHUTDOWN**: Inference server is shutdown/cancelled.
Note that the base URL is only available when the model is in the `READY` state, and if you've changed the Slurm log directory path, you also need to specify it when using the `status` command.
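For example, if the server was launched with a custom log directory, pass the same path when checking its status (the path here is again illustrative):

```bash
# Point the status check at the same log directory used at launch
vec-inf status 13014393 --log-dir /scratch/$USER/vec-inf-logs
```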
Finally, when you're finished using a model, you can shut it down by providing the Slurm job ID:
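For example, to shut down the server launched above (this assumes the CLI exposes a `shutdown` subcommand that takes the Slurm job ID):

```bash
# Cancel the Slurm job and stop the inference server
vec-inf shutdown 13014393
```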
The `launch`, `list`, and `status` commands support `--json-mode`, where the command output is structured as a JSON string.
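For example, the status check above could be run as:

```bash
# Request JSON-structured output (flag placement after the subcommand is assumed)
vec-inf status 13014393 --json-mode
```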
## Send inference requests
Once the inference server is ready, you can start sending inference requests. We provide example scripts for sending inference requests in the [`examples`](examples) folder. Make sure to update the model server URL and the model weights location in the scripts. For example, you can run `python examples/inference/llm/completions.py`, and the server's completion response will be printed as output.
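Because the server is OpenAI-compatible, you can also query it directly, for example with `curl`. A minimal sketch, where the host, port, and model name are placeholders; use the values reported for your own server:

```bash
# Illustrative request to the OpenAI-compatible completions endpoint;
# replace the host, port, and model name with your server's values.
curl http://gpu029:8081/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Meta-Llama-3.1-8B-Instruct",
        "prompt": "What is the capital of Canada?",
        "max_tokens": 32
      }'
```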
If you want to run inference from your local device, you can open an SSH tunnel to your cluster environment, like the following:
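A sketch of such a tunnel, assuming the server is listening on port 8081 on node gpu029; substitute your own username, node address, and port:

```bash
# Forward local port 8081 to the inference server on the compute node
ssh -L 8081:172.17.8.29:8081 username@v.vectorinstitute.ai -N
```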
Where the last number in the URL is the GPU number (gpu029 in this case). The example provided above is for the Vector cluster; change the variables accordingly for your environment.