Implement preemption via recomputation & Refactor scheduling logic #12
Conversation
LGTM! Left some small comments.
cacheflow/master/scheduler.py
Outdated
# sequences, we only support swapping.
# TODO(woosuk): Support recomputation for sequence groups with multiple
# sequences.
Should we expose different preemption methods as options? For example, add a `preempt_method` function argument so the caller can pick between swapping and recomputation.
I added `PreemptionMode` and allowed the caller of `_preempt` to specify the mode. If the mode is not specified, we use recomputation for single-output requests and swapping for multi-output requests.
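A minimal sketch of how such a mode enum and the default mode selection could look (illustrative names, not the exact PR code; `choose_preemption_mode` is a stand-in for the logic inside `_preempt`):

```python
import enum
from typing import Optional


class PreemptionMode(enum.Enum):
    SWAP = enum.auto()
    RECOMPUTE = enum.auto()


def choose_preemption_mode(num_seqs: int,
                           mode: Optional[PreemptionMode] = None) -> PreemptionMode:
    """Pick a preemption mode when the caller did not specify one.

    Recomputation is cheap for single-output requests (the whole sequence can
    simply be re-prefilled), while groups with multiple sequences fall back to
    swapping their KV blocks out to CPU memory.
    """
    if mode is not None:
        return mode
    return PreemptionMode.RECOMPUTE if num_seqs == 1 else PreemptionMode.SWAP
```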
class PolicyFactory:

    _POLICY_REGISTRY = {
        'fcfs': FCFS,
Will we add SSF in another PR?
Yes. In this PR, I tried to make minimal changes.
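For context, a registry-based factory like the one in the diff can be completed roughly as below. This is a hedged sketch with simplified stand-ins for the PR's `Policy`/`FCFS` classes; an `'ssf'` (shortest-sequence-first) entry could later be registered the same way:

```python
from typing import Any, Dict, List


class Policy:
    def sort_by_priority(self, now: float, seq_groups: List[Any]) -> List[Any]:
        raise NotImplementedError


class FCFS(Policy):
    def sort_by_priority(self, now: float, seq_groups: List[Any]) -> List[Any]:
        # First-come-first-served: earliest arrival time has highest priority.
        # `now` is unused here but kept for the common Policy interface.
        return sorted(seq_groups, key=lambda g: g.arrival_time)


class PolicyFactory:

    _POLICY_REGISTRY: Dict[str, type] = {
        'fcfs': FCFS,
    }

    @classmethod
    def get_policy(cls, policy_name: str, **kwargs) -> Policy:
        return cls._POLICY_REGISTRY[policy_name](**kwargs)
```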
            # No other sequence groups can be swapped out.
            if self.running:
                # Preempt the lowest-priority sequence groups.
                victim_seq_group = self.running.pop(-1)
                self._preempt(victim_seq_group, blocks_to_swap_out)
                preempted.append(victim_seq_group)
            else:
                # No other sequence groups can be preempted.
                # Preempt the current sequence group.
                self._preempt(seq_group, blocks_to_swap_out)
                preempted.append(seq_group)
                break
        else:
            # Append new slots to the sequence group.
            self._append(seq_group, blocks_to_copy)

        self.running = self.running[:victim_idx + 1]
        # 2. Swap in the swapped sequences if possible.
        # NOTE: Here we implicitly assume FCFS scheduling.
        # The swapped sequences are in LIFO order.
        for i, seq_group in enumerate(reversed(self.swapped)):
            if self.block_manager.can_swap_in(seq_group):
                self._swap_in(seq_group, blocks_to_swap_in)
                self._append(seq_group, blocks_to_copy)
            else:
                # OOM. Stop swapping.
                self.swapped = self.swapped[:len(self.swapped) - i]
            running.append(seq_group)

        self.running = running
        # Swap in the sequence groups in the SWAPPED state if possible.
        self.swapped = self.policy.sort_by_priority(now, self.swapped)
        while self.swapped:
            seq_group = self.swapped[0]
            # If the sequence group has been preempted in this step, stop.
            if seq_group in preempted:
                break
            # If the sequence group cannot be swapped in, stop.
            if not self.block_manager.can_swap_in(seq_group):
                break
        else:
            # All swapped sequences are swapped in.
            self.swapped.clear()
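The swap-in loop relies on Python's loop `else` clause: the `else` body runs only when the loop finishes without hitting `break`. A standalone illustration of the same pattern (illustrative names, not the scheduler's code):

```python
def drain(queue, can_admit):
    """Admit items from the front of `queue` until one is rejected."""
    admitted = []
    while queue:
        if not can_admit(queue[0]):
            break  # Stopped early: the else clause below is skipped.
        admitted.append(queue.pop(0))
    else:
        # Reached only when the queue emptied without a break,
        # i.e. everything was admitted.
        queue.clear()
    return admitted
```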
        # Ensure that swap-in and swap-out never happen at the same timestep.
        if blocks_to_swap_in:
            assert not blocks_to_swap_out

            seq_group = self.swapped.pop(0)
            self._swap_in(seq_group, blocks_to_swap_in)
            self._append(seq_group, blocks_to_copy)
            self.running.append(seq_group)
        num_batched_tokens = sum(
            seq_group.num_seqs(status=SequenceStatus.RUNNING)
            for seq_group in self.running)
-        # 3. Join new sequences if possible.
-        # NOTE: Here we implicitly assume FCFS scheduling.
-        # TODO(woosuk): Add a batching policy to control the batch size.
+        # Join waiting sequences if possible.
         prompt_group_ids: List[int] = []
+        # NOTE(woosuk): The sequence groups in the SWAPPED state are strictly
+        # prioritized over the sequence groups in the WAITING state.
+        # This is because we want to bound the amount of CPU memory taken by
+        # the swapped sequence groups.
         if not self.swapped:
-            for i, seq_group in enumerate(self.pending):
+            self.waiting = self.policy.sort_by_priority(now, self.waiting)
+            while self.waiting:
+                seq_group = self.waiting[0]
+                # If the sequence group has been preempted in this step, stop.
+                if seq_group in preempted:
+                    break
+                # If the sequence group cannot be allocated, stop.
+                if not self.block_manager.can_allocate(seq_group):
+                    break
+
+                # If the number of batched tokens exceeds the limit, stop.
                 num_prompt_tokens = seq_group.seqs[0].get_len()
-                if self.block_manager.can_allocate(seq_group):
-                    if (num_batched_tokens + num_prompt_tokens
-                            <= self.max_num_batched_tokens):
-                        self._allocate(seq_group)
-                        num_batched_tokens += num_prompt_tokens
-                        continue
-                self.pending = self.pending[i:]
-                break
-            else:
-                self.pending.clear()
+                if (num_batched_tokens + num_prompt_tokens
+                        > self.max_num_batched_tokens):
+                    break
+
+                seq_group = self.waiting.pop(0)
+                self._allocate(seq_group)
+                self.running.append(seq_group)
+                num_batched_tokens += num_prompt_tokens
+                prompt_group_ids.append(seq_group.group_id)
Maybe move this part to a new function dedicated to swapping and finding which sequences to run?
Good point. I moved the scheduling logic to a new function, `_schedule`.
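The admission step of the scheduling logic boils down to a token-budget loop. A hedged sketch (illustrative names and signature, not the PR's exact code): keep admitting the highest-priority waiting group until the next prompt would exceed the per-step batch budget.

```python
from typing import Any, Callable, List, Tuple


def admit_waiting(waiting: List[Any],
                  num_batched_tokens: int,
                  max_num_batched_tokens: int,
                  prompt_len: Callable[[Any], int]) -> Tuple[List[Any], int]:
    """Pop groups off the front of `waiting` while the token budget allows."""
    scheduled = []
    while waiting:
        n = prompt_len(waiting[0])
        # Stop once the next prompt would blow the per-step token budget.
        if num_batched_tokens + n > max_num_batched_tokens:
            break
        scheduled.append(waiting.pop(0))
        num_batched_tokens += n
    return scheduled, num_batched_tokens
```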
@@ -76,7 +76,8 @@ def __init__(
         self.block_tables: Dict[int, BlockTable] = {}

     def can_allocate(self, seq_group: SequenceGroup) -> bool:
-        # NOTE: Here we assume that all sequences in the group have the same prompt.
+        # FIXME(woosuk): Here we assume that all sequences in the group share
+        # the same prompt. This may not be true for preempted sequences.
If I understand correctly, is this function only wrong when we use recomputation preemption for parallel decoding?
Yes, and for beam search as well.
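To make the assumption concrete, here is a hedged sketch (hypothetical helper, not the block manager's API) of why per-sequence allocation differs once sequences stop sharing a prompt, as happens after recomputation-preemption of parallel decoding or beam search:

```python
import math
from typing import List


def blocks_needed(seq_lens: List[int], block_size: int,
                  shared_prompt: bool) -> int:
    """Count KV-cache blocks for a group of sequences.

    If all sequences share one identical prompt, its blocks are allocated once
    and shared. Once sequences have diverged (e.g. generated tokens were folded
    into the "prompt" by recomputation), each needs its own blocks.
    """
    if shared_prompt:
        # One copy of the (identical) prompt serves all sequences.
        return math.ceil(seq_lens[0] / block_size)
    # Diverged sequences each need their own blocks.
    return sum(math.ceil(n / block_size) for n in seq_lens)
```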
Hi @WoosukKwon, if we had a kernel that could do one of the following,
I think we could solve the problem of preemption by recomputation for multi-sequence requests. Do you agree? We would first run the normal prefill on the shared prompt tokens, followed by the necessary copying of partially shared blocks.
* [Update] LMcache connector v1 implementation Signed-off-by: ApostaC <yihua98@uchicago.edu> * [Add] examples for disaggregated prefill Signed-off-by: ApostaC <yihua98@uchicago.edu> * [add] extra information about evns Signed-off-by: ApostaC <yihua98@uchicago.edu> * Initial stubs for P/D scheduling changes Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * Updates Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * Rs branch (#3) * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * Rs branch (#5) Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * Remove Unneeded Arguments (#7) * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * stash Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * cleanup Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> --------- Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * Improve disagg-example.sh (#8) - fix spelling - CUDA_VISIBLE_DEVICES should be set externally Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * added connector Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com 
<robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * update Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * remove Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * seems to load properly Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * Revert "updated" This reverts commit 97316d9. 
* updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * stash Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * added Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * diffs for local dev on macos Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * updated Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * update Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * updaed Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * updated Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * updated Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * Checkpoint. Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * updated Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * Cleanup Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * WIP Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * updated Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * updated Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * updated on scheduler side Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * updated Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * updated Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * updated Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * updated Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * updated Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * updated Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * Hacking away Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * cleanup Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * ensure request removed from running list Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * Runs E2E. Garbage output. 
Crashes on 2nd request Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * update Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * updated Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * updated Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * rename files Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * updated Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * updated Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * updated Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * updated Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * updated Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * update Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> * Second request no longer crashes Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * Remove gpu_model_runner hacks Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * Clean up Justfile Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * [Bugfix] Stale finished requests in EMPTY_MODEL_RUNNER_OUTPUT Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * update Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * justfile edits Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * Update Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * Fixes - lm_eval gsm8k has correctness Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * "just delete the assert" Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * fixup precommit issues Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * Fixes Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * updated (#12) Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * Add Accuracy Test (#13) * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: 
rshaw@neuralmagic.com <robertgshaw2@gmail.com> --------- Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * Preemption Bugfixes (#15) * stash fixed double free issue Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * fixed issue Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updatrd Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updatrd Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updatrd Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updatrd Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updatrd Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updatrd Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> --------- Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated (#16) Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * Fix Bad Merge | Fix Memory Leak in Upstream (vllm-project#18) * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * fix merge Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> --------- Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: 
rshaw@neuralmagic.com <robertgshaw2@gmail.com> * cleanup code Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * cleanup code Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * stash Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updatted Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * revert Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * more spurious changes Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com 
<robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * updated Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * Support MLA in NIXL connector Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * WIP adding tests Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * wip Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> * Fixes Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> --------- Signed-off-by: ApostaC <yihua98@uchicago.edu> Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> Signed-off-by: Robert Shaw <rshaw@neuralmagic.com> Co-authored-by: ApostaC <yihua98@uchicago.edu> Co-authored-by: Robert Shaw <114415538+robertgshaw2-redhat@users.noreply.github.com> Co-authored-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> Co-authored-by: Robert Shaw <rshaw@neuralmagic.com>
This PR implements a new preemption (eviction) mechanism: recomputation. In our benchmarks, recomputation is more efficient than swapping, because swapping incurs significant overhead from numerous small data transfers between CPU and GPU. Thus, we use recomputation as the default preemption mechanism.
However, we do not currently support recomputation for sequence groups with multiple sequences. When token blocks are shared across sequences, the recomputation logic becomes very complex, and we do not have CUDA kernels to support it efficiently. We fall back to swapping in this case despite its overheads.
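The mode-selection rule described above (and the `PreemptionMode` option mentioned in the review thread) can be sketched roughly as follows. This is a minimal illustration, not the actual scheduler code: the names `PreemptionMode` and `choose_preemption_mode` follow the discussion in this PR, but the signature and placement are assumptions.

```python
from enum import Enum, auto
from typing import Optional


class PreemptionMode(Enum):
    # Recompute: free the group's blocks and re-run prefill later.
    # Cheap for single-sequence groups; avoids many small CPU<->GPU copies.
    RECOMPUTE = auto()
    # Swap: copy the group's KV-cache blocks out to CPU memory and back.
    SWAP = auto()


def choose_preemption_mode(
    num_seqs: int,
    mode: Optional[PreemptionMode] = None,
) -> PreemptionMode:
    """Pick a preemption mode for a sequence group.

    The caller may force a mode. Otherwise, use recomputation for
    single-sequence groups and swapping for multi-sequence groups,
    whose shared token blocks make recomputation complex.
    """
    if mode is not None:
        return mode
    return PreemptionMode.RECOMPUTE if num_seqs == 1 else PreemptionMode.SWAP
```

In this sketch, a `_preempt`-style method would call `choose_preemption_mode` and then either free the group's blocks (recompute) or schedule a block swap-out, mirroring the default policy described above.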
In addition, this PR refactors the scheduling logic to make it easier to understand.