
Commit 156dfa5

Merge pull request #9 from VectorInstitute/develop
Develop
2 parents 2c43a25 + 254df3b commit 156dfa5


43 files changed: +2133 additions, -1317 deletions

.gitignore

Lines changed: 5 additions & 2 deletions

```diff
@@ -142,7 +142,7 @@ dmypy.json
 *.err
 
 # Server url files
-.vLLM*
+*_url
 
 logs/
 
@@ -151,4 +151,7 @@ slurm/
 scripts/
 
 # vLLM bug reporting files
-collect_env.py
+collect_env.py
+
+# build files
+dist/
```

Dockerfile

Lines changed: 3 additions & 9 deletions

```diff
@@ -53,20 +53,14 @@ RUN python3.10 -m pip install --upgrade pip
 # Install Poetry using Python 3.10
 RUN python3.10 -m pip install poetry
 
-# Clone the repository
-RUN git clone https://github.com/VectorInstitute/vector-inference /vec-inf
-
-# Set the working directory
-WORKDIR /vec-inf
-
-# Configure Poetry to not create virtual environments
+# Don't create venv
 RUN poetry config virtualenvs.create false
 
 # Update Poetry lock file if necessary
 RUN poetry lock
 
-# Install project dependencies via Poetry
-RUN poetry install
+# Install vec-inf
+RUN python3.10 -m pip install vec-inf[dev]
 
 # Install Flash Attention 2 backend
 RUN python3.10 -m pip install flash-attn --no-build-isolation
```
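With this change the image installs `vec-inf` from PyPI instead of cloning the repository and installing it with Poetry. A minimal sketch of building and smoke-testing the image, assuming you run it from the repository root and that the base image provides the CUDA toolchain flash-attn needs at build time (the `vec-inf` image tag is an arbitrary choice):

```bash
# Build the image from the repository root; the tag name is arbitrary
docker build -t vec-inf .

# Run a throwaway container to confirm the CLI was installed
docker run --rm vec-inf vec-inf --help
```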

README.md

Lines changed: 36 additions & 40 deletions

````diff
@@ -1,62 +1,58 @@
 # Vector Inference: Easy inference on Slurm clusters
-This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). **All scripts in this repository runs natively on the Vector Institute cluster environment**. To adapt to other environments, update the config files in the `models` folder and the environment variables in the model launching scripts accordingly.
+This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). **All scripts in this repository run natively on the Vector Institute cluster environment**. To adapt to other environments, update [`launch_server.sh`](vec-inf/launch_server.sh), [`vllm.slurm`](vec-inf/vllm.slurm), [`multinode_vllm.slurm`](vec-inf/multinode_vllm.slurm) and [`models.csv`](vec-inf/models/models.csv) accordingly.
 
 ## Installation
-If you are using the Vector cluster environment, and you don't need any customization to the inference server environment, you can go to the next section as we have a default container environment in place. Otherwise, you might need up to 10GB of storage to setup your own virtual environment. The following steps needs to be run only once for each user.
-
-1. Setup the virtual environment for running inference servers, run
+If you are using the Vector cluster environment, and you don't need any customization to the inference server environment, run the following to install the package:
 ```bash
-bash venv.sh
+pip install vec-inf
 ```
-More details can be found in [venv.sh](venv.sh), make sure to adjust the commands to your environment if you're not using the Vector cluster.
+Otherwise, we recommend using the provided [`Dockerfile`](Dockerfile) to set up your own environment with the package.
 
-2. Locate your virtual environment by running
+## Launch an inference server
+We will use the Llama 3.1 model as an example; to launch an OpenAI-compatible inference server for Meta-Llama-3.1-8B-Instruct, run:
 ```bash
-poetry env info --path
+vec-inf launch Meta-Llama-3.1-8B-Instruct
 ```
+You should see an output like the following:
 
-1. OPTIONAL: It is recommended to enable [FlashAttention](https://github.com/Dao-AILab/flash-attention) backend for better performance, run the following commands inside your environment to install:
-```bash
-pip install wheel
-
-# Change the path according to your environment, this is an example for the Vector cluster
-export CUDA_HOME=/pkgs/cuda-12.3
+<img width="450" alt="launch_img" src="https://github.com/user-attachments/assets/557eb421-47db-4810-bccd-c49c526b1b43">
 
-pip install flash-attn --no-build-isolation
-pip install vllm-flash-attn
-```
+The model will be launched using the [default parameters](vec-inf/models/models.csv); you can override these values by providing additional options. Use `--help` to see the full list.
+If you'd like to see the Slurm logs, they are located in the `.vec-inf-logs` folder in your home directory. The log folder path can be modified by using the `--log-dir` option.
 
-## Launch an inference server
-We will use the Llama 3 model as example, to launch an inference server for Llama 3 8B, run
+You can check the inference server status by providing the Slurm job ID to the `status` command:
 ```bash
-bash src/launch_server.sh --model-family llama3
+vec-inf status 13014393
 ```
+
 You should see an output like the following:
-> Job Name: vLLM/Meta-Llama-3-8B
->
-> Partition: a40
->
-> Generic Resource Scheduling: gpu:1
->
-> Data Type: auto
->
-> Submitted batch job 12217446
-
-If you want to use your own virtual environment, you can run this instead:
-```bash
-bash src/launch_server.sh --model-family llama3 --venv $(poetry env info --path)
-```
-By default, the `launch_server.sh` script is set to use the 8B variant for Llama 3 based on the config file in `models/llama3` folder, you can switch to other variants with the `--model-variant` argument, and make sure to change the requested resource accordingly. More information about the flags and customizations can be found in the [`models`](models) folder. The inference server is compatible with the OpenAI `Completion` and `ChatCompletion` API. You can inspect the Slurm output files to check the inference server status.
 
-Here is a more complicated example that launches a model variant using multiple nodes, say we want to launch Mixtral 8x22B, run
+<img width="450" alt="status_img" src="https://github.com/user-attachments/assets/7385b9ca-9159-4ca9-bae2-7e26d80d9747">
+
+There are 5 possible states:
+
+* **PENDING**: Job submitted to Slurm, but not executed yet. Job pending reason will be shown.
+* **LAUNCHING**: Job is running but the server is not ready yet.
+* **READY**: Inference server running and ready to take requests.
+* **FAILED**: Inference server in an unhealthy state. Job failed reason will be shown.
+* **SHUTDOWN**: Inference server is shutdown/cancelled.
+
+Note that the base URL is only available when the model is in the `READY` state, and if you've changed the Slurm log directory path, you also need to specify it when using the `status` command.
+
+Finally, when you're finished using a model, you can shut it down by providing the Slurm job ID:
 ```bash
-bash src/launch_server.sh --model-family mixtral --model-variant 8x22B-v0.1 --num-nodes 2 --num-gpus 4
+vec-inf shutdown 13014393
+
+> Shutting down model with Slurm Job ID: 13014393
 ```
 
-And for launching a multimodal model, here is an example for launching LLaVa-NEXT Mistral 7B (default variant)
+You can view the full list of available models by running the `list` command:
 ```bash
-bash src/launch_server.sh --model-family llava-next --is-vlm
+vec-inf list
 ```
+<img width="1200" alt="list_img" src="https://github.com/user-attachments/assets/a4f0d896-989d-43bf-82a2-6a6e5d0d288f">
+
+The `launch`, `list`, and `status` commands support `--json-mode`, where the command output is structured as a JSON string.
 
 ## Send inference requests
 Once the inference server is ready, you can start sending in inference requests. We provide example scripts for sending inference requests in [`examples`](examples) folder. Make sure to update the model server URL and the model weights location in the scripts. For example, you can run `python examples/inference/llm/completions.py`, and you should expect to see an output like the following:
@@ -69,4 +65,4 @@ If you want to run inference from your local device, you can open a SSH tunnel t
 ```bash
 ssh -L 8081:172.17.8.29:8081 username@v.vectorinstitute.ai -N
 ```
-The example provided above is for the vector cluster, change the variables accordingly for your environment
+Here the last number in the URL is the GPU number (gpu029 in this case). The example provided above is for the Vector cluster; change the variables accordingly for your environment.
````
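Since the launched server exposes vLLM's OpenAI-compatible API, requests can be sent with any OpenAI-style client once the model is in the `READY` state. Below is a minimal sketch using `curl`; the host, port, and model name are placeholders, so substitute the base URL and served model name reported by the `status` command for your own job:

```bash
# Hypothetical base URL; use the one reported by `vec-inf status`.
# With the SSH tunnel above, http://localhost:8081/v1 works from a local machine.
curl http://gpu029:8081/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Meta-Llama-3.1-8B-Instruct",
        "prompt": "What is the capital of Canada?",
        "max_tokens": 32
      }'
```

For scripting, the `--json-mode` flag mentioned in the README makes the CLI output machine-readable, e.g. `vec-inf status 13014393 --json-mode` (job ID shown is the README's example).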

models/README.md

Lines changed: 0 additions & 48 deletions
This file was deleted.

models/codellama/config.sh

Lines changed: 0 additions & 5 deletions
This file was deleted.

models/command-r/config.sh

Lines changed: 0 additions & 5 deletions
This file was deleted.

models/dbrx/config.sh

Lines changed: 0 additions & 5 deletions
This file was deleted.

models/llama2/config.sh

Lines changed: 0 additions & 5 deletions
This file was deleted.

models/llama3/config.sh

Lines changed: 0 additions & 5 deletions
This file was deleted.

models/llava-1.5/chat_template.jinja

Lines changed: 0 additions & 23 deletions
This file was deleted.

models/llava-1.5/config.sh

Lines changed: 0 additions & 10 deletions
This file was deleted.

models/llava-next/chat_template.jinja

Lines changed: 0 additions & 23 deletions
This file was deleted.

models/llava-next/config.sh

Lines changed: 0 additions & 10 deletions
This file was deleted.

models/mistral/README.md

Lines changed: 0 additions & 7 deletions
This file was deleted.

models/mistral/config.sh

Lines changed: 0 additions & 5 deletions
This file was deleted.

models/mixtral/config.sh

Lines changed: 0 additions & 5 deletions
This file was deleted.
