
Commit 7a0823f

pcmoritz authored and jimpang committed
Add documentation on how to do incremental builds (vllm-project#2796)
1 parent 88483a6 commit 7a0823f

File tree

2 files changed: +15 −0 lines changed

docs/source/getting_started/installation.rst

Lines changed: 10 additions & 0 deletions
@@ -67,3 +67,13 @@ You can also build and install vLLM from source:

     $ # Use `--ipc=host` to make sure the shared memory is large enough.
     $ docker run --gpus all -it --rm --ipc=host nvcr.io/nvidia/pytorch:23.10-py3
+
+.. note::
+
+    If you are developing the C++ backend of vLLM, consider building vLLM with
+
+    .. code-block:: console
+
+        $ python setup.py develop
+
+    since it will give you incremental builds. The downside is that this method
+    is `deprecated by setuptools <https://github.com/pypa/setuptools/issues/917>`_.
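As a sketch of the mechanism the note relies on: a develop/editable install links the installed package back to the source tree, so source edits take effect without reinstalling (for vLLM the same mode additionally gives incremental recompilation of the C++/CUDA extensions, which this pure-Python demo does not cover). The package name `demopkg` and the temporary directory are hypothetical, not part of vLLM; `pip install -e .` is used here as the setuptools-supported equivalent of the deprecated `python setup.py develop`.

```shell
# Stand-alone demo of an editable ("develop") install on a throwaway package.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/demopkg"
printf 'VALUE = 1\n' > "$tmp/demopkg/__init__.py"
cat > "$tmp/setup.py" <<'EOF'
from setuptools import setup
setup(name="demopkg", version="0.1", packages=["demopkg"])
EOF
cd "$tmp"
# `pip install -e .` is the supported route; `python setup.py develop`
# does the same thing but is deprecated by setuptools.
pip install -e . --no-build-isolation --quiet
# Edit the source in place; the change is visible on the next import,
# with no rebuild or reinstall step.
printf 'VALUE = 2\n' > "$tmp/demopkg/__init__.py"
cd /
python -c "import demopkg; print(demopkg.VALUE)"
```

Running the last command prints the updated value, confirming that imports resolve against the live source tree rather than a copied install.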

setup.py

Lines changed: 5 additions & 0 deletions
@@ -15,6 +15,11 @@

 ROOT_DIR = os.path.dirname(__file__)

+# If you are developing the C++ backend of vLLM, consider building vLLM with
+# `python setup.py develop` since it will give you incremental builds.
+# The downside is that this method is deprecated, see
+# https://github.com/pypa/setuptools/issues/917
+
 MAIN_CUDA_VERSION = "12.1"

 # Supported NVIDIA GPU architectures.
