
Add more percentiles and latencies #7759

Merged · 6 commits merged into vllm-project:main on Aug 29, 2024

Conversation

@wschin (Contributor) commented Aug 21, 2024

The 99th percentile can be far from the median, the mean, and sometimes even the 95th percentile. This PR adds more latency percentiles and an end-to-end latency metric to the serving benchmark (benchmark_serving.py).

Example usage (see --percentile-metrics and --metric-percentiles):

model_name="meta-llama/Meta-Llama-3.1-70B-Instruct"
python3 benchmark_serving.py \
    --backend vllm \
    --dataset-name sharegpt \
    --dataset-path ShareGPT_V3_unfiltered_cleaned_split.json \
    --model $model_name \
    --num-prompts 20 \
    --endpoint /v1/completions \
    --tokenizer $model_name \
    --save-result \
    --percentile-metrics "ttft,tpot,itl,e2el" \
    --metric-percentiles "30,50,70,90" \
    --port 9487 \
    2>&1 | tee benchmark_serving.txt
bench_serving_exit_code=$?

produces

============ Serving Benchmark Result ============
Successful requests:                     20        
Benchmark duration (s):                  34.57     
Total input tokens:                      3590      
Total generated tokens:                  3927      
Request throughput (req/s):              0.58      
Input token throughput (tok/s):          103.86    
Output token throughput (tok/s):         113.61    
---------------Time to First Token----------------
Mean TTFT (ms):                          280.00    
Median TTFT (ms):                        184.59    
P30 TTFT (ms):                           183.32    
P50 TTFT (ms):                           184.59    
P70 TTFT (ms):                           429.35    
P90 TTFT (ms):                           518.89    
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          48.98     
Median TPOT (ms):                        48.70     
P30 TPOT (ms):                           47.43     
P50 TPOT (ms):                           48.70     
P70 TPOT (ms):                           49.40     
P90 TPOT (ms):                           50.74     
---------------Inter-token Latency----------------
Mean ITL (ms):                           47.45     
Median ITL (ms):                         46.75     
P30 ITL (ms):                            46.32     
P50 ITL (ms):                            46.75     
P70 ITL (ms):                            47.46     
P90 ITL (ms):                            48.31     
----------------End-to-end Latency----------------
Mean E2EL (ms):                          5670.77   
Median E2EL (ms):                        9628.67   
P30 E2EL (ms):                           2497.91   
P50 E2EL (ms):                           5670.77   
P70 E2EL (ms):                           13700.87  
P90 E2EL (ms):                           20139.72  
==================================================
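For reference, the percentile rows above are just percentiles taken over the per-request latency samples. A minimal sketch of that computation (the sample data and variable names below are hypothetical, not the benchmark's actual code path):

# Sketch: reporting latency percentiles the way the benchmark output does.
# `e2e_latencies_ms` is hypothetical example data, not real measurements.
import numpy as np

e2e_latencies_ms = [2497.9, 5670.8, 9628.7, 13700.9, 20139.7]
selected_percentiles = [30, 50, 70, 90]

print(f"Mean E2EL (ms):   {np.mean(e2e_latencies_ms):.2f}")
print(f"Median E2EL (ms): {np.median(e2e_latencies_ms):.2f}")
for p in selected_percentiles:
    print(f"P{p} E2EL (ms):    {np.percentile(e2e_latencies_ms, p):.2f}")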


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which consists of a small, essential subset of CI tests to quickly catch errors. You can run other CI tests on top of the default ones by unblocking the steps in your fastcheck build on the Buildkite UI.

Once the PR is approved and ready to go, please make sure to run the full CI, as it is required for merging (or just use auto-merge).

To run full CI, you can do one of these:

  • Comment /ready on the PR
  • Add ready label to the PR
  • Enable auto-merge.

🚀

@wschin (Contributor, Author) commented Aug 22, 2024

/ready

The github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Aug 22, 2024.
@ywang96 (Member) commented Aug 22, 2024

Hey @wschin! Thank you for making this PR!

I personally don't really mind this change, but it could affect the parsing in our performance benchmark CI. Also, when I initially worked on benchmark_serving.py, one of my design principles was to keep it easily customizable and portable, so that users can adjust the script to their needs without having to upstream their changes to the main branch.

@KuntaiDu What do you think of this change, since you would need to rework the parsing in the perf CI if we merge it? I'm personally okay with the current benchmark too, since the median is usually a robust metric on its own.

@comaniac (Collaborator) commented

Can we make this optional so that we can still report/parse median values in CI, but users can configure the script to report more numbers if they want? For example, we could support an option like --percentiles "25,50,95". Since everyone may care about different percentiles (e.g., we care about P90 more than P95), this approach offers that flexibility.
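A minimal sketch of the kind of optional flag being proposed here; the flag name, default, and help text are illustrative, not necessarily what the PR ended up using:

# Sketch of an optional percentile flag whose default preserves the
# median-only output that the existing CI parsing expects.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--percentiles",
    type=str,
    default="50",  # median only by default, so CI parsing keeps working
    help="Comma-separated list of percentiles to report, e.g. '25,50,95'.",
)
args = parser.parse_args(["--percentiles", "25,50,95"])
selected_percentiles = [float(p) for p in args.percentiles.split(",")]
print(selected_percentiles)  # [25.0, 50.0, 95.0]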

@ywang96 (Member) commented Aug 22, 2024

Can we make this optional so that we can still report/parse median values in CI, but users can configure the script to report more numbers if they want? For example, we could support an option like --percentiles "25,50,95". Since everyone may care about different percentiles (e.g., we care about P90 more than P95), this approach offers that flexibility.

That's a good idea too!

@DarkLight1337 requested a review from ywang96 on August 22, 2024 at 08:20
@wschin (Contributor, Author) commented Aug 22, 2024

Sounds good. I will make it optional.

@comaniac (Collaborator) left a comment

Overall LGTM. My only comment is that we could just call it "e2e" or "end-to-end". Also, could you post the example outputs with this PR for illustration? Thanks!

Review thread on benchmarks/benchmark_serving.py (outdated, resolved)
@wschin (Contributor, Author) commented Aug 24, 2024

Overall LGTM. My only comment is that we could just call it "e2e" or "end-to-end". Also, could you post the example outputs with this PR for illustration? Thanks!

Done. Thanks.

@comaniac (Collaborator) commented

cc @ywang96 @KuntaiDu

@wschin (Contributor, Author) commented Aug 26, 2024

@ywang96, @KuntaiDu gentle ping for review. Thanks.

@ywang96 (Member) commented Aug 26, 2024

@wschin Will take a look today - sorry for the delay!

@ywang96 (Member) left a comment

Thank you for this PR! @wschin

I left a small nit. At a glance this shouldn't affect our perf benchmark parsing, given the default values, but I will let @KuntaiDu confirm!

Comment on lines +450 to +480
def process_one_metric(
    # E.g., "ttft"
    metric_attribute_name: str,
    # E.g., "TTFT"
    metric_name: str,
    # E.g., "Time to First Token"
    metric_header: str,
):
    # This function prints and adds statistics of the specified
    # metric.
    if metric_attribute_name not in selected_percentile_metrics:
        return
    print("{s:{c}^{n}}".format(s=metric_header, n=50, c='-'))
    print("{:<40} {:<10.2f}".format(
        f"Mean {metric_name} (ms):",
        getattr(metrics, f"mean_{metric_attribute_name}_ms")))
    print("{:<40} {:<10.2f}".format(
        f"Median {metric_name} (ms):",
        getattr(metrics, f"median_{metric_attribute_name}_ms")))
    result[f"mean_{metric_attribute_name}_ms"] = getattr(
        metrics, f"mean_{metric_attribute_name}_ms")
    result[f"median_{metric_attribute_name}_ms"] = getattr(
        metrics, f"median_{metric_attribute_name}_ms")
    result[f"std_{metric_attribute_name}_ms"] = getattr(
        metrics, f"std_{metric_attribute_name}_ms")
    for p, value in getattr(metrics,
                            f"percentiles_{metric_attribute_name}_ms"):
        p_word = str(int(p)) if int(p) == p else str(p)
        print("{:<40} {:<10.2f}".format(f"P{p_word} {metric_name} (ms):",
                                        value))
        result[f"p{p_word}_{metric_attribute_name}_ms"] = value
Member

Thanks for this refactoring!

@@ -765,6 +804,23 @@ def main(args: argparse.Namespace):
"{backend}-{args.request_rate}qps-{base_model_id}-{current_dt}.json"
" format.",
)
parser.add_argument(
"--percentile-metrics",
Member

nit: could we have this as simply --metrics? I think having --percentile-metrics and --metric-percentiles together might be confusing to users.

Contributor Author

From the docstrings, it's clear what they are. In fact, it would still be somewhat confusing if we renamed it to --metrics, because some metrics are always computed.
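For reference, a sketch of what the two argparse definitions under discussion might look like; the flag names match the PR's CLI, but the defaults and help text here are paraphrased and may differ from the merged code:

# Sketch only: --percentile-metrics selects which metrics get percentile
# rows, --metric-percentiles selects which percentiles are reported.
import argparse

parser = argparse.ArgumentParser(description="benchmark_serving.py (excerpt)")
parser.add_argument(
    "--percentile-metrics",
    type=str,
    default="ttft,tpot,itl",
    help="Comma-separated list of metrics to report percentiles for. "
         "Allowed values: ttft, tpot, itl, e2el.",
)
parser.add_argument(
    "--metric-percentiles",
    type=str,
    default="99",
    help="Comma-separated list of percentiles for the selected metrics, "
         "e.g. '25,50,75'.",
)
args = parser.parse_args(["--metric-percentiles", "30,50,70,90"])
print(args.percentile_metrics, args.metric_percentiles)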

@wschin (Contributor, Author) commented Aug 29, 2024

I'd rather not make more changes to this PR. Could you please merge it to avoid merge conflicts?

@ywang96 (Member) left a comment

@wschin Let's merge this, thank you for making this contribution!

@ywang96 merged commit 0c785d3 into vllm-project:main on Aug 29, 2024 (30 checks passed).
dsikka pushed a commit to neuralmagic/nm-vllm that referenced this pull request Aug 31, 2024
triple-Mu pushed a commit to triple-Mu/vllm_official that referenced this pull request Sep 4, 2024
dsikka pushed a commit to neuralmagic/vllm that referenced this pull request Sep 5, 2024
@KuntaiDu (Collaborator) commented Sep 5, 2024

Sorry for not being responsive enough over the last two weeks; I was working hard on the performance benchmark. This PR looks good to me, and I will make some accommodating changes in the various benchmarks.

opus24 added a commit to Hyper-Accel/vllm that referenced this pull request Sep 10, 2024
Jeffwan pushed a commit to aibrix/vllm that referenced this pull request Sep 19, 2024
siddharth9820 pushed a commit to axonn-ai/vllm that referenced this pull request Sep 30, 2024
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
KuntaiDu pushed a commit to KuntaiDu/vllm that referenced this pull request Nov 20, 2024
Labels: ready (ONLY add when PR is ready to merge/full CI is needed)
4 participants