
Bump version to v0.4.3 #5046

Merged · 1 commit into vllm-project:main on May 30, 2024
Conversation

simon-mo (Collaborator) commented on May 25, 2024

Benchmark results from @KuntaiDu:
https://kuntai.notion.site/vLLM-benchmark-455e830910034635839e356bcb14604b?pvs=4

| Configuration | Offline: request throughput (req/s) / token throughput (tok/s) | Online: request throughput (req/s) / input token throughput (tok/s) / output token throughput (tok/s) |
|---|---|---|
| 1 GPU, Llama 7B, v0.4.2 | 14.33 / 6895.75 | 12.29 / 3024.07 / 2396.00 |
| 4 GPU, Llama 70B, v0.4.2 | 8.05 / 3871.86 | 7.20 / 1770.12 / 1460.62 |
| 1 GPU, Llama 7B, v0.4.3 | 15.62 / 7516.03 | 13.18 / 3241.44 / 2566.47 |
| 4 GPU, Llama 70B, v0.4.3 | 10.15 / 4884.14 | 8.62 / 2120.21 / 1749.05 |
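
The offline column above was presumably produced with vLLM's own benchmark scripts (e.g. `benchmarks/benchmark_throughput.py` in the repo), not the snippet below. As a minimal sketch of how a comparable offline measurement could be taken with the public `LLM` API, assuming placeholder model name, prompts, and sampling parameters:

```python
# Minimal sketch of an offline throughput measurement with vLLM's Python API.
# The model, prompt set, and sampling parameters are illustrative placeholders,
# not the settings used for the benchmark numbers in the table above.
import time

from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-7b-hf")  # for a 70B model, pass tensor_parallel_size=4
prompts = ["Summarize the benefits of paged attention."] * 256
params = SamplingParams(temperature=0.0, max_tokens=128)

start = time.perf_counter()
outputs = llm.generate(prompts, params)  # batched offline generation
elapsed = time.perf_counter() - start

output_tokens = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"Request throughput: {len(outputs) / elapsed:.2f} req/s")
print(f"Output token throughput: {output_tokens / elapsed:.2f} tok/s")
```

The online column is measured against a running API server rather than the in-process API, so the sketch above only approximates the offline case.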

simon-mo merged commit 87a658c into vllm-project:main on May 30, 2024
60 of 63 checks passed
blinkbear pushed a commit to blinkbear/vllm that referenced this pull request May 31, 2024
dtrifiro pushed a commit to opendatahub-io/vllm that referenced this pull request May 31, 2024
robertgshaw2-neuralmagic pushed a commit to neuralmagic/nm-vllm that referenced this pull request Jun 8, 2024
joerunde pushed a commit to joerunde/vllm that referenced this pull request Jun 17, 2024
robertgshaw2-neuralmagic pushed a commit to neuralmagic/nm-vllm that referenced this pull request Jul 14, 2024
Temirulan pushed a commit to Temirulan/vllm-whisper that referenced this pull request Sep 6, 2024