Commit cc38c0b ("update according to comments"), parent c0623e4
5 files changed: 1 addition, 13 deletions

CMakeLists.txt (1 addition, 1 deletion)

````diff
@@ -220,7 +220,7 @@ endif()
 if(ARCH STREQUAL "x86_64")
   if (NOT CMAKE_CUDA_ARCHITECTURES)
-    if (${CMAKE_CUDA_COMPILER_VERSION} VERSION_LESS_EQUAL "13")
+    if (${CMAKE_CUDA_COMPILER_VERSION} VERSION_LESS "13")
       set(CMAKE_CUDA_ARCHITECTURES 70-real 75-real) # V100, 2080
     endif()
     if (${CMAKE_CUDA_COMPILER_VERSION} VERSION_GREATER_EQUAL "11")
````
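The switch from `VERSION_LESS_EQUAL` to `VERSION_LESS` matters because CMake compares versions component-wise and pads missing components with zeros, so `"13"` and `"13.0"` compare equal: under the old check a CUDA 13.0 compiler would still have been given the sm_70/sm_75 fallback architectures, while the new check excludes it. A minimal standalone sketch of these comparison semantics (runnable with `cmake -P compare.cmake`; the version values are illustrative, not taken from the commit):

```cmake
# compare.cmake: demonstrates CMake's component-wise version comparison,
# in which missing components are treated as zero ("13" == "13.0").
foreach(ver "12.4" "13.0")
  if(ver VERSION_LESS_EQUAL "13")
    # True for both 12.4 and 13.0 (the old check's behavior)
    message(STATUS "${ver} VERSION_LESS_EQUAL 13")
  endif()
  if(ver VERSION_LESS "13")
    # True only for 12.4 (the new check's behavior)
    message(STATUS "${ver} VERSION_LESS 13")
  endif()
endforeach()
```

So with the new condition, only compilers strictly older than 13 get `CMAKE_CUDA_ARCHITECTURES` seeded with `70-real 75-real`.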

docs/en/faq.md (2 deletions)

````diff
@@ -20,8 +20,6 @@ It may have been caused by the following reasons.
    pip install lmdeploy[all]
    ```
 
-   If you want to install the nightly build of LMDeploy's whl package, you can download and install it from the latest release at https://github.com/zhyncs/lmdeploy-build according to your CUDA and Python versions. Currently the update frequency of whl is once a day.
-
 2. If you have installed it and still encounter this issue, it is probably because you are executing turbomind-related command in the root directory of lmdeploy source code. Switching to another directory will fix it.
 
 But if you are a developer, you often need to develop and compile locally. The efficiency of installing whl every time is too low. You can specify the path of lib after compilation through symbolic links.
````

docs/en/get_started/installation.md (4 deletions)

````diff
@@ -28,10 +28,6 @@ export PYTHON_VERSION=310
 pip install https://github.com/InternLM/lmdeploy/releases/download/v${LMDEPLOY_VERSION}/lmdeploy-${LMDEPLOY_VERSION}+cu118-cp${PYTHON_VERSION}-cp${PYTHON_VERSION}-manylinux2014_x86_64.whl --extra-index-url https://download.pytorch.org/whl/cu118
 ```
 
-## Install nightly-build package with pip
-
-The release frequency of LMDeploy is approximately once or twice monthly. If your desired feature has been merged to LMDeploy main branch but hasn't been published yet, you can experiment with the nightly-built package available [here](https://github.com/zhyncs/lmdeploy-build) according to your CUDA and Python versions
-
 ## Install from source
 
 By default, LMDeploy will build with NVIDIA CUDA support, utilizing both the Turbomind and PyTorch backends. Before installing LMDeploy, ensure you have successfully installed the CUDA Toolkit.
````

docs/zh_cn/faq.md (2 deletions)

````diff
@@ -20,8 +20,6 @@ pip install --upgrade mmengine
 pip install lmdeploy[all]
 ```
 
-如果您想安装 LMDeploy 预编译包的 nightly 版本,可以根据您的 CUDA 和 Python 版本从 https://github.com/zhyncs/lmdeploy-build 下载并安装最新发布的包。目前更新频率是每天一次。
-
 2. 如果已经安装了,还是出现这个问题,请检查下执行目录。不要在 lmdeploy 的源码根目录下执行 python -m lmdeploy.turbomind.\*下的package,换到其他目录下执行。
 
 但是如果您是开发人员,通常需要在本地进行开发和编译。每次安装 whl 的效率太低了。您可以通过符号链接在编译后指定 lib 的路径。
````

docs/zh_cn/get_started/installation.md (4 deletions)

````diff
@@ -28,10 +28,6 @@ export PYTHON_VERSION=310
 pip install https://github.com/InternLM/lmdeploy/releases/download/v${LMDEPLOY_VERSION}/lmdeploy-${LMDEPLOY_VERSION}+cu118-cp${PYTHON_VERSION}-cp${PYTHON_VERSION}-manylinux2014_x86_64.whl --extra-index-url https://download.pytorch.org/whl/cu118
 ```
 
-## 使用 pip 安装夜间构建包
-
-LMDeploy 的发布频率大约是每月一次或两次。如果你所需的功能已经被合并到 LMDeploy 的主分支但还没有发布,你可以环境中的 CUDA 和 Python 版本,尝试使用[这里](https://github.com/zhyncs/lmdeploy-build)提供的夜间构建包。
-
 ## 从源码安装
 
 默认情况下,LMDeploy 将面向 NVIDIA CUDA 环境进行编译安装,并同时启用 Turbomind 和 PyTorch 两种后端引擎。在安装 LMDeploy 之前,请确保已成功安装 CUDA 工具包。
````
