
[SpecDecode][Kernel] Use Flashinfer for Rejection Sampling in Speculative Decoding #7244

Merged

Conversation

LiuXiaoxuanPKU (Collaborator)

End-to-end speculative decoding performance (request latency):
Draft: Llama-160M, Target: Vicuna-7B, batch_size=8, input_len=256, output_len=512

Before this PR:

Avg latency: 5.9652480507269505 seconds
10% percentile latency: 5.729408229794354 seconds
25% percentile latency: 5.794497653492726 seconds
50% percentile latency: 5.954964595614001 seconds
75% percentile latency: 6.124162045423873 seconds
90% percentile latency: 6.1757167306263 seconds
99% percentile latency: 6.3235187567165125 seconds

After this PR:

Avg latency: 5.717350374627858 seconds
10% percentile latency: 5.423373702447861 seconds
25% percentile latency: 5.50113928236533 seconds
50% percentile latency: 5.768905333476141 seconds
75% percentile latency: 5.950261808000505 seconds
90% percentile latency: 5.9829305820167065 seconds
99% percentile latency: 5.992807335173711 seconds
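For context, a minimal sketch of how a request-latency measurement like this could be reproduced with the offline LLM API (the numbers above appear to come from vLLM's latency benchmark). The model identifiers, speculative-decoding arguments, and prompt construction below are assumptions for illustration and may differ across vLLM versions:

```python
import time
from vllm import LLM, SamplingParams

# Assumed draft/target pair and speculative settings; argument names follow
# vLLM engine args of this era and may differ in other versions.
llm = LLM(
    model="lmsys/vicuna-7b-v1.5",
    speculative_model="JackFram/llama-160m",
    num_speculative_tokens=4,
    use_v2_block_manager=True,
)

prompts = ["Hello world. " * 100] * 8  # batch_size = 8, roughly input_len = 256
params = SamplingParams(temperature=1.0, max_tokens=512, ignore_eos=True)

start = time.perf_counter()
llm.generate(prompts, params)
print(f"batch latency: {time.perf_counter() - start:.3f} s")
```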

github-actions bot commented Aug 7, 2024

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which consists of a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of the default ones by unblocking the steps in your fast-check build in the Buildkite UI.

Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge).

To run full CI, you can do one of these:

  • Comment /ready on the PR
  • Add ready label to the PR
  • Enable auto-merge.

🚀

@cadedaniel (Collaborator) left a comment:

LGTM. Two questions:

  • Can we run the correctness test for both paths? Specifically the convergence test (all of the e2e tests depend on this for temperature > 0); see the sketch below.
  • Can we make sure there is no perf regression for the non-FlashInfer path?
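For readers unfamiliar with the convergence test mentioned above, the property it checks can be sketched in plain PyTorch, independent of either backend. This is a simplified, single-position version over assumed toy distributions; the real test exercises the full sampler:

```python
import torch

torch.manual_seed(0)
vocab_size, num_trials = 8, 200_000

target = torch.rand(vocab_size).softmax(dim=-1)  # p(x): target model distribution
draft = torch.rand(vocab_size).softmax(dim=-1)   # q(x): draft model distribution

# Draft proposes tokens from q(x); accept each with probability min(1, p/q).
draft_tokens = torch.multinomial(draft, num_trials, replacement=True)
accept_prob = (target / draft)[draft_tokens].clamp(max=1.0)
accepted = torch.rand(num_trials) < accept_prob

# On rejection, resample from the normalized residual distribution max(p - q, 0).
residual = (target - draft).clamp(min=0)
residual = residual / residual.sum()
recovered = torch.multinomial(residual, num_trials, replacement=True)

emitted = torch.where(accepted, draft_tokens, recovered)
empirical = torch.bincount(emitted, minlength=vocab_size).float() / num_trials

# Rejection sampling should converge to the target distribution regardless of q.
print((empirical - target).abs().max())  # expected to be small, on the order of 1e-3
```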

comaniac (Collaborator) commented Aug 7, 2024

Note: there's a bugfix for a correctness issue in this sampling kernel (flashinfer-ai/flashinfer#425), so we may want to bump FlashInfer to the next release.

yzh119 commented Aug 7, 2024

> Note: there's a bugfix for a correctness issue in this sampling kernel (flashinfer-ai/flashinfer#425), so we may want to bump FlashInfer to the next release.

Yes, @LiuXiaoxuanPKU's numbers were measured with the flashinfer main branch, where #425 was already merged. This PR depends on flashinfer v0.1.4.

LiuXiaoxuanPKU (Collaborator, Author):

@cadedaniel
The PR is ready for a second review. It should pass the speculative decoding tests, but I do have some questions and concerns.

  1. Currently, we update the metrics in _create_output; however, with the rejection sampling kernel we no longer go through that code path. Therefore, we update the metrics outside it (self.num_accepted_tokens += batch_size * k; see the sketch after this list). I'm not sure if that's good practice.
  2. The rejection sampler does not return accepted tokens under vLLM speculative decoding's definition, so the metric is not meaningful for now. I updated it that way just to pass the CI.
  3. In the current CI, we install flashinfer by default, so all speculative decoding tests (including rejection sampling) use the kernel. Do we want to keep the tests for the old code path? Would that be a bit expensive for CI?
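A minimal sketch of the stopgap described in point 1, with illustrative counter names modeled on the base sampler (the actual attributes and call site in vLLM may differ):

```python
class SamplerMetricsSketch:
    """Illustrative only: counters bumped outside _create_output when the
    FlashInfer kernel path is taken."""

    def __init__(self) -> None:
        self.num_accepted_tokens = 0
        self.num_emitted_tokens = 0
        self.num_draft_tokens = 0

    def update_after_flashinfer(self, batch_size: int, k: int) -> None:
        # Stopgap: the kernel does not yet report acceptance in vLLM's terms,
        # so the counters are bumped by the maximum possible counts just to
        # keep the metrics populated (hence "not meaningful for now").
        self.num_accepted_tokens += batch_size * k
        self.num_emitted_tokens += batch_size * (k + 1)  # +1 bonus token per sequence (assumed)
        self.num_draft_tokens += batch_size * k


metrics = SamplerMetricsSketch()
metrics.update_after_flashinfer(batch_size=8, k=4)
print(metrics.num_accepted_tokens)  # 32
```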

LiuXiaoxuanPKU (Collaborator, Author) commented Aug 18, 2024

@cadedaniel Updates:

  1. Modified the flashinfer kernel to return the metric required by vLLM's spec decode metrics. PR: add accept num, emit num metric for ChainSpeculativeSampling flashinfer-ai/flashinfer#450
  2. Changed the rejection sampler tests to mainly test the flashinfer backend.
  3. Added tests to compare the flashinfer and non-flashinfer backend results.

This PR is ready. CI tests might fail because we need a flashinfer release and then to add the latest flashinfer to CI. Tests passed locally.

@comaniac (Collaborator) left a comment:

Overall LGTM. Just nits.

@@ -84,8 +82,8 @@ def test_correct_output_format(which_tokens_accepted: str,
size=(batch_size, 1),
dtype=torch.int64)

rejection_sampler = RejectionSampler(
disable_bonus_tokens=disable_bonus_tokens)
rejection_sampler = RejectionSampler(disable_bonus_tokens=False,
Collaborator:

I feel you can leave this test untouched, and just rename the following test to "test_flashinfer_backed".

@@ -29,10 +45,14 @@ def __init__(self,
"""
super().__init__(disable_bonus_tokens=disable_bonus_tokens,
strict_mode=strict_mode)
self.use_flashinfer = use_flashinfer
if self.use_flashinfer:
assert not disable_bonus_tokens, \
Collaborator:

Could this be just a warning?

Collaborator (Author):

Ideally, when disable_bonus_tokens is set, the bonus token should be -1.
However, if we use flashinfer and set disable_bonus_tokens, the bonus token will still have values (!= -1), which makes the results incorrect. I guess it might be better to just fail here?
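To illustrate the convention being discussed: when bonus tokens are disabled, vLLM expects the bonus position to be -1 so it is ignored downstream. A hypothetical post-processing helper is sketched below (assuming output shape [batch_size, k + 1] with the bonus token in the last column); the FlashInfer kernel always fills that position in, which is why an assert was chosen over a warning.

```python
import torch

def mask_bonus_tokens(output_token_ids: torch.Tensor) -> torch.Tensor:
    # output_token_ids: [batch_size, k + 1]; the last column holds the bonus token.
    # Setting it to -1 marks it as "not emitted" for the rest of the pipeline.
    masked = output_token_ids.clone()
    masked[:, -1] = -1
    return masked
```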

Collaborator:

We can remove the disable_bonus_token path completely now that #4212 is fixed.

But if it's too much work, let's just leave it as an assert; that way "no failure" means the user gets the experience we planned for them instead of missing a warning and getting subpar performance.

@cadedaniel (Collaborator):

Will review today.

yzh119 commented Aug 28, 2024

FYI: flashinfer v0.1.6 wheels are ready: https://github.com/flashinfer-ai/flashinfer/releases/tag/v0.1.6

LiuXiaoxuanPKU (Collaborator, Author):

/ready

The github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Aug 28, 2024.
batch_size: int, device: str):

def get_seeded_seqs():
seeded_mask = torch.rand(batch_size, dtype=torch.float32) <= 1.0
Collaborator:

I think this needs to go outside of the helper function; otherwise the rand will be different on each call.

Collaborator:

I realize there's an error here -- it should be torch.rand(...) <= 0.5

Collaborator (Author):

I think I will just remove the rand; we should fix the generator for each request in the batch instead of seeding a random 50% of them (sketched below).
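A minimal sketch of the approach described above, assuming the test passes a dict mapping request index to a torch.Generator (as the sampler's seeded_seqs argument suggests); the exact test plumbing is omitted:

```python
import torch

def get_seeded_seqs(batch_size: int, device: str = "cuda") -> dict[int, torch.Generator]:
    # One dedicated generator per request, each with a fixed seed, so every
    # sequence in the batch is reproducible (instead of seeding a random 50%).
    return {
        i: torch.Generator(device=device).manual_seed(i)
        for i in range(batch_size)
    }
```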

@@ -130,6 +136,9 @@ def forward(

# num_emitted_tokens returned by flashinfer
# does not include the bonus token
# Flashinfer stops at the first token that violates

Why not just align flashinfer's behavior with this API's?
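Roughly, the convention gap discussed here: FlashInfer counts only the accepted draft tokens (it stops at the first rejection and does not count the recovered or bonus token), while vLLM's emitted-token metric also counts the one token appended after the accepted prefix in each sequence. Under that assumption the adjustment is a simple per-sequence offset; this is an illustration, not the PR's exact accounting:

```python
def to_vllm_emitted_tokens(flashinfer_emitted: int, batch_size: int) -> int:
    # Each sequence emits its accepted draft tokens plus exactly one extra
    # token (recovered on rejection, or bonus when everything is accepted).
    return flashinfer_emitted + batch_size
```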

youkaichao merged commit e6a26ed into vllm-project:main on Sep 2, 2024 (55 of 58 checks passed).
triple-Mu pushed a commit to triple-Mu/vllm_official that referenced this pull request Sep 4, 2024
dsikka pushed a commit to neuralmagic/vllm that referenced this pull request Sep 5, 2024
opus24 added a commit to Hyper-Accel/vllm that referenced this pull request Sep 10, 2024
LiuXiaoxuanPKU deleted the flashinfer-rejection-sampler branch on September 17, 2024.