[Misc]add coding benchmark for speculative decoding #15303
Conversation
Thank you and LGTM!
For your reference, here are statistics about the input/output lengths (number of tokens) in this dataset. Not sure whether you want to change DEFAULT_OUTPUT_LEN accordingly.

| | avg | min | max |
| --- | --- | --- | --- |
| instruction + input | 151 | 15 | 837 |
| output | 179 | 9 | 1317 |
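For anyone wanting to reproduce these numbers, a minimal sketch, assuming the dataset exposes instruction/input/output columns and using the Llama-3 tokenizer from the benchmark commands below:

```python
# Hypothetical helper, not part of this PR: token-length stats for the dataset.
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("likaixin/InstructCoder", split="train")
tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

def token_stats(texts):
    lens = [len(tok(t).input_ids) for t in texts]
    return sum(lens) / len(lens), min(lens), max(lens)

print("instruction + input:", token_stats(ex["instruction"] + " " + ex["input"] for ex in ds))
print("output:", token_stats(ex["output"] for ex in ds))
```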
LGTM, @ywang96 please also take a look in case I missed anything!
Also, I'm wondering if it's possible to share some simple benchmark results on the InstructCoder dataset. Really appreciate it!

@LiuXiaoxuanPKU I updated the benchmark, but it looks like the throughput regressed a bit. I also shared my command. Do you think we should benchmark at a low batch size, or does this look good to you?
LGTM! I left some nits.
Can you also share an example benchmark command with benchmark_serving.py and its result? Thanks!

@ywang96 I posted the command and the results in the PR description
@JenZhao I updated the code
LGTM! Thanks for the contribution!
Can you fix the pre-commit errors so we can merge it?
thank you!
Signed-off-by: CXIAAAAA <cxia0209@gmail.com>

CXIAAAAA force-pushed from 93aa6d3 to a6e95c7
@ywang96 looks like it is ready to merge

The ngram result seems worse, possibly because you did not use --speculative-disable-by-batch-size?
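For reference, a hedged sketch of how that flag could be attached to the ngram server command from the PR description below; the threshold of 32 is an illustrative assumption, not a value from this thread:

```bash
# Sketch only: disable speculation once the running batch exceeds 32 requests.
VLLM_USE_V1=1 vllm serve meta-llama/Meta-Llama-3-8B-Instruct \
    --speculative-model "[ngram]" \
    --ngram_prompt_lookup_min 2 \
    --ngram-prompt-lookup-max 5 \
    --num_speculative_tokens 5 \
    --speculative-disable-by-batch-size 32
```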
```python
    dataset_class = VisionArenaDataset
elif args.dataset_path == "likaixin/InstructCoder":
    dataset_class = InstructCoderDataset
    args.hf_split = "train"
```
Curious why this was hardcoded to "train" and not parameterized by args.hf_split as done in the other cases? If we are evaluating a model, shouldn't it be tested on a non-train split?
It is hardcoded for now; sure, we could add it to args.hf_split.
As for the train/eval split, everyone's method of splitting might be slightly different, so this is just an example.
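A minimal sketch of what that parameterization could look like, assuming args.hf_split is None when the user does not pass it explicitly (hypothetical, not the PR's actual code):

```python
elif args.dataset_path == "likaixin/InstructCoder":
    dataset_class = InstructCoderDataset
    # Respect a user-provided --hf-split; fall back to "train" otherwise.
    if args.hf_split is None:
        args.hf_split = "train"
```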
The default batch size is too large; normally we measure at a low batch size, e.g. 1 or 32.
Yup, the high batch size in this benchmark was the reason it regressed. This PR has the latest speedup: #18971
Add likaixin/InstructCoder as a dataset for the speculative decoding throughput benchmark.
To run the InstructCoder benchmark:
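The exact command is not captured above; a hedged sketch, assuming benchmark_throughput.py accepts the same dataset flags as the benchmark_serving.py invocation below:

```bash
# Sketch only -- flags assumed to mirror benchmark_serving.py.
python3 benchmarks/benchmark_throughput.py \
    --model meta-llama/Meta-Llama-3-8B-Instruct \
    --dataset-name hf \
    --dataset-path likaixin/InstructCoder \
    --num-prompts 2048
```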
To run the random benchmark:
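Likewise a hedged sketch for the random baseline; the input/output lengths are illustrative assumptions taken from the dataset's average token counts above:

```bash
# Sketch only -- length values chosen for illustration.
python3 benchmarks/benchmark_throughput.py \
    --model meta-llama/Meta-Llama-3-8B-Instruct \
    --dataset-name random \
    --input-len 151 \
    --output-len 179 \
    --num-prompts 2048
```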
baseline:
Throughput: 37.29 requests/s, 12226.80 total tokens/s, 7457.64 output tokens/s
ngram proposer: --speculative-model "[ngram]" --ngram_prompt_lookup_min 2 --ngram-prompt-lookup-max 5 --num_speculative_tokens 5
Throughput: 35.28 requests/s, 11569.61 total tokens/s, 7056.79 output tokens/s
benchmark_serving:
server:
VLLM_USE_V1=1 vllm serve meta-llama/Meta-Llama-3-8B-Instruct (--speculative-model "[ngram]" --ngram_prompt_lookup_min 2 --ngram-prompt-lookup-max 5 --num_speculative_tokens 5)
client:
python3 benchmarks/benchmark_serving.py --model meta-llama/Meta-Llama-3-8B-Instruct --dataset-name hf --dataset-path likaixin/InstructCoder --num-prompts 2048
baseline:

ngram proposer:
