Commit 047f46d

committed
Signed-off-by: Jennifer Zhao <7443418+JenZhao@users.noreply.github.com>
1 parent fcffa3c

File tree

1 file changed: +4 -7 lines changed


docs/source/getting_started/v1_user_guide.md

Lines changed: 4 additions & 7 deletions
```diff
@@ -24,8 +24,7 @@ Upgrade to vLLM’s Core Architecture](https://blog.vllm.ai/2025/01/27/v1-alpha-
 
 ### Logprobs
 
-vLLM V1 introduces support for returning logprobs and prompt logprobs.
-However, there are some important semantic
+vLLM V1 supports logprobs and prompt logprobs. However, there are some important semantic
 differences compared to V0:
 
 **Prompt Logprobs Without Prefix Caching**
```
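For readers of the reworded passage, requesting logprobs and prompt logprobs through vLLM's offline `LLM` API might look like the minimal sketch below; the model name, prompt, and top-k values are illustrative assumptions, not anything specified in the guide.

```python
from vllm import LLM, SamplingParams

# Illustrative model; any model vLLM supports works the same way.
llm = LLM(model="facebook/opt-125m")

# Ask for the top-5 logprobs of each sampled token and the top-3
# logprobs of each prompt token.
params = SamplingParams(max_tokens=16, logprobs=5, prompt_logprobs=3)

outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    # Prompt logprobs: one entry per prompt token (the first is None).
    print(out.prompt_logprobs)
    # Sample logprobs: one dict per generated token, mapping token id
    # to a Logprob with .logprob and .rank.
    print(out.outputs[0].logprobs)
```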
```diff
@@ -55,13 +54,11 @@ As part of the major architectural rework in vLLM V1, several legacy features ha
 
 **Deprecated sampling features**
 
-- **best_of**: The sampling parameter best_of—which in V0 enabled
-generating multiple candidate outputs per request and then selecting the best
-one—has been deprecated in V1.
+- **best_of**: See details in this [PR #13361](https://github.com/vllm-project/vllm/issues/13361)
 - **Per-Request Logits Processors**: In V0, users could pass custom
 processing functions to adjust logits on a per-request basis. In vLLM V1 this
-mechanism is deprecated. Instead, the design is moving toward supporting global
-logits processors—a feature the team is actively working on for future releases.
+is deprecated. Instead, the design is moving toward supporting global logits
+processors—a feature the team is actively working on for future releases.
 
 **Deprecated KV Cache features**
 
```
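For context on the two deprecation bullets, the sketch below shows the V0-era usage they refer to, assuming V0's `SamplingParams` fields `best_of` and `logits_processors`; the model name and processor logic are invented for the example, and V1 deprecates both fields.

```python
from typing import List

import torch
from vllm import LLM, SamplingParams

# V0-style per-request logits processor: called at each decoding step with
# the token ids generated so far and the logits for the next token.
def ban_token_zero(token_ids: List[int], logits: torch.Tensor) -> torch.Tensor:
    logits[0] = float("-inf")  # suppress token id 0 for this request only
    return logits

llm = LLM(model="facebook/opt-125m")  # illustrative model choice

# Both fields below are the V0 features the diff marks as deprecated:
# best_of=3 sampled three candidates server-side and returned the best one;
# logits_processors attached the per-request hook defined above.
params = SamplingParams(
    max_tokens=16,
    best_of=3,
    logits_processors=[ban_token_zero],
)
print(llm.generate(["Hello,"], params)[0].outputs[0].text)
```

Under the global design the diff mentions, such hooks would be registered once at the engine level rather than passed with each request.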