Commit 869e829

[doc] add doc to explain how to use uv (#11773)
Signed-off-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Parent: 8f37be3

File tree: 1 file changed (+52, -15 lines)

docs/source/getting_started/installation/gpu-cuda.md

vLLM is a Python library that also contains pre-compiled C++ and CUDA (12.1) binaries.

## Install released versions

### Create a new Python environment

You can create a new Python environment using `conda`:

```console
$ # (Recommended) Create a new conda environment.
$ conda create -n myenv python=3.12 -y
$ conda activate myenv
```

```{note}
[PyTorch has deprecated the conda release channel](https://github.com/pytorch/pytorch/issues/138506). If you use `conda`, please use it only to create the Python environment rather than to install packages. In particular, PyTorch installed via `conda` statically links the `NCCL` library, which can cause issues when vLLM tries to use `NCCL`. See <gh-issue:8420> for more details.
```
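
If you are unsure whether an existing `torch` in your environment came from `pip` or `conda`, here is a minimal sketch to check, using standard `pip` and `conda` queries:

```console
$ # A pip-managed torch shows up here with its metadata.
$ pip show torch
$ # A conda-managed torch (with statically linked NCCL) is listed here instead.
$ conda list torch
```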

Or you can create a new Python environment using [uv](https://docs.astral.sh/uv/), a very fast Python environment manager. Please follow the [documentation](https://docs.astral.sh/uv/#getting-started) to install `uv`. After installing `uv`, you can create a new Python environment using the following command:

```console
$ # (Recommended) Create a new uv environment. Use `--seed` to install `pip` and `setuptools` in the environment.
$ uv venv myenv --python 3.12 --seed
$ source myenv/bin/activate
```

In order to be performant, vLLM has to compile many CUDA kernels. The compilation unfortunately introduces binary incompatibility with other CUDA and PyTorch versions, even for the same PyTorch version built with a different configuration.

Therefore, it is recommended to install vLLM in a **fresh** environment. If you have a different CUDA version or want to use an existing PyTorch installation, you need to build vLLM from source. See [below](#build-from-source) for more details.
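
To see which CUDA version your current setup targets before deciding, a quick sketch (assuming the CUDA toolkit and PyTorch are already installed):

```console
$ # CUDA toolkit version available on the machine.
$ nvcc --version
$ # CUDA version the installed PyTorch was built against.
$ python -c "import torch; print(torch.version.cuda)"
```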
### Install vLLM

You can install vLLM using either `pip` or `uv pip`:

```console
$ # Install vLLM with CUDA 12.1.
$ pip install vllm     # If you are using pip.
$ uv pip install vllm  # If you are using uv.
```

As of now, vLLM's binaries are compiled with CUDA 12.1 and public PyTorch release versions by default. We also provide vLLM binaries compiled with CUDA 11.8 and public PyTorch release versions:

```console
$ # Install vLLM with CUDA 11.8.
$ export VLLM_VERSION=0.6.1.post1 # pick the released version you want
$ export PYTHON_VERSION=310
$ pip install https://github.com/vllm-project/vllm/releases/download/v${VLLM_VERSION}/vllm-${VLLM_VERSION}+cu118-cp${PYTHON_VERSION}-cp${PYTHON_VERSION}-manylinux1_x86_64.whl --extra-index-url https://download.pytorch.org/whl/cu118
```

(install-the-latest-code)=

## Install the latest code

LLM inference is a fast-evolving field, and the latest code may contain bug fixes, performance improvements, and new features that are not released yet. To allow users to try the latest code without waiting for the next release, vLLM provides wheels for Linux running on an x86 platform with CUDA 12 for every commit since `v0.5.3`.

### Install the latest code using `pip`

```console
$ pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/nightly/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
```

If you want to access the wheels for previous commits (e.g. to bisect a behavior change or a performance regression), you can specify the commit hash in the URL:

```console
$ export VLLM_COMMIT=33f460b17a54acb3b6cc0b03f4a17876cff5eafd # use full commit hash from the main branch
$ pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/${VLLM_COMMIT}/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
```

Note that the wheels are built with Python 3.8 ABI (see [PEP 425](https://peps.python.org/pep-0425/) for more details about ABI), so **they are compatible with Python 3.8 and later**. The version string in the wheel file name (`1.0.0.dev`) is just a placeholder to have a unified URL for the wheels; the actual versions are contained in the wheel metadata. Although we don't support Python 3.8 anymore (because PyTorch 2.5 dropped support for Python 3.8), the wheels are still built with the Python 3.8 ABI to keep the same wheel name as before.
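
Since the wheel file name is only a placeholder, check the installed version from the package metadata rather than the URL; a minimal sketch:

```console
$ # The wheel metadata, not the file name, carries the actual dev version.
$ pip show vllm
$ python -c "import vllm; print(vllm.__version__)"
```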

Due to a limitation of `pip`, you have to specify the full URL of the wheel file.

### Install the latest code using `uv`

Another way to install the latest code is to use `uv`:

```console
$ uv pip install vllm --extra-index-url https://wheels.vllm.ai/nightly
```

If you want to access the wheels for previous commits (e.g. to bisect a behavior change or a performance regression), you can specify the commit hash in the URL:

```console
$ export VLLM_COMMIT=72d9c316d3f6ede485146fe5aabd4e61dbc59069 # use full commit hash from the main branch
$ uv pip install vllm --extra-index-url https://wheels.vllm.ai/${VLLM_COMMIT}
```

The `uv` approach works for vLLM `v0.6.6` and later and offers an easy-to-remember command. A unique feature of `uv` is that packages in `--extra-index-url` have [higher priority than the default index](https://docs.astral.sh/uv/pip/compatibility/#packages-that-exist-on-multiple-indexes). If the latest public release is `v0.6.6.post1`, `uv` allows installing a commit before `v0.6.6.post1` by specifying the `--extra-index-url`. In contrast, `pip` combines packages from `--extra-index-url` and the default index, choosing only the latest version, which makes it difficult to install a development version prior to the released one.
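
After installing from the extra index, you can confirm that `uv` resolved a development build rather than the PyPI release; a minimal sketch:

```console
$ # The resolved version should carry a dev suffix, not the plain release.
$ uv pip show vllm
```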
### Install the latest code using `docker`

Another way to access the latest code is to use the docker images:

```console
$ export VLLM_COMMIT=33f460b17a54acb3b6cc0b03f4a17876cff5eafd # use full commit hash from the main branch
$ docker pull public.ecr.aws/q9t5s3a7/vllm-ci-postmerge-repo:${VLLM_COMMIT}
```
