2 files changed: +15 −0

docs/source/getting_started

@@ -67,3 +67,13 @@ You can also build and install vLLM from source:

    $ # Use `--ipc=host` to make sure the shared memory is large enough.
    $ docker run --gpus all -it --rm --ipc=host nvcr.io/nvidia/pytorch:23.10-py3
+
+.. note::
+    If you are developing the C++ backend of vLLM, consider building vLLM with
+
+    .. code-block:: console
+
+        $ python setup.py develop
+
+    since it will give you incremental builds. The downside is that this method
+    is `deprecated by setuptools <https://github.com/pypa/setuptools/issues/917>`_.
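The note above describes an incremental-build workflow. As a minimal sketch of how it might be used in practice (the clone step and the rebuild sequence are assumptions for illustration, not part of this diff):

    $ git clone https://github.com/vllm-project/vllm.git
    $ cd vllm
    $ # First invocation compiles the C++/CUDA extensions in place (slow).
    $ python setup.py develop
    $ # After editing a C++/CUDA source file, rerun the same command;
    $ # only the changed objects should be recompiled (incremental build).
    $ python setup.py develop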
@@ -15,6 +15,11 @@

ROOT_DIR = os.path.dirname(__file__)

+# If you are developing the C++ backend of vLLM, consider building vLLM with
+# `python setup.py develop` since it will give you incremental builds.
+# The downside is that this method is deprecated, see
+# https://github.com/pypa/setuptools/issues/917
+
MAIN_CUDA_VERSION = "12.1"

# Supported NVIDIA GPU architectures.