`docs/features/nixl_connector_usage.md` — 4 additions & 4 deletions

```diff
@@ -9,7 +9,7 @@ NixlConnector is a high-performance KV cache transfer connector for vLLM's disag
 Install the NIXL library: `uv pip install nixl`, as a quick start.
 
 - Refer to [NIXL official repository](https://github.com/ai-dynamo/nixl) for more installation instructions
-- The specified required NIXL version can be found in [requirements/kv_connectors.txt](../../requirements/kv_connectors.txt) and other relevant config files
+- The specified required NIXL version can be found in [requirements/kv_connectors.txt](gh-file:requirements/kv_connectors.txt) and other relevant config files
```
`docs/models/supported_models.md` — 1 addition & 1 deletion

```diff
@@ -29,7 +29,7 @@ _*Vision-language models currently accept only image inputs. Support for video i
 
 If the Transformers model implementation follows all the steps in [writing a custom model](#writing-custom-models) then, when used with the Transformers backend, it will be compatible with the following features of vLLM:
 
-- All the features listed in the [compatibility matrix](../features/compatibility_matrix.md#feature-x-feature)
+- All the features listed in the [compatibility matrix](../features/README.md#feature-x-feature)
 - Any combination of the following vLLM parallelisation schemes:
```