Outdated benchmarks #381

Closed
OlivierDehaene opened this issue Jul 6, 2023 · 7 comments
@OlivierDehaene

Hello!

PagedAttention was added to text-generation-inference in 0.9, so all the benchmarks you display in your README are now outdated.

Any chance you could update them?
Cheers

@OlivierDehaene
Author

For example, running your throughput benchmark on Llama-7B on a single A10, we get a throughput of 112 req/min with TGI.

docker run --gpus all -p 3000:80 -v /data:/data ghcr.io/huggingface/text-generation-inference:0.9.1 --model-id /data/llama-7b --num-shard 1 --max-batch-total-tokens 17664 --max-batch-prefill-tokens 2048 --max-waiting-tokens 0 
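
A quick way to sanity-check the server once the container above is running (a minimal sketch, assuming TGI's /generate route and the 3000:80 port mapping from the command; the prompt and token count are arbitrary):

    import requests

    # Query the TGI server started above; host port 3000 maps to the container's port 80.
    resp = requests.post(
        "http://localhost:3000/generate",
        json={"inputs": "What is deep learning?", "parameters": {"max_new_tokens": 32}},
        timeout=60,
    )
    print(resp.json()["generated_text"])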

@zhuohan123
Member

@OlivierDehaene Thanks for your support on PagedAttention! We will test the performance of the latest TGI and update the figure accordingly.

@njhill
Member

njhill commented Jul 6, 2023

It would be awesome to include some latency numbers in addition to just throughput!
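
One simple way such numbers could be collected, as a sketch: time individual requests against the same /generate route used above. This measures end-to-end request latency only; time-to-first-token would need the streaming endpoint instead. All parameter choices here are illustrative assumptions:

    import time
    import requests

    # Hypothetical latency probe against a TGI server assumed to run on localhost:3000.
    latencies = []
    for _ in range(10):
        start = time.perf_counter()
        requests.post(
            "http://localhost:3000/generate",
            json={"inputs": "Hello", "parameters": {"max_new_tokens": 16}},
            timeout=60,
        )
        latencies.append(time.perf_counter() - start)

    latencies.sort()
    # Rough p50/p90 from 10 samples; use many more requests for real numbers.
    print(f"p50={latencies[4]:.3f}s p90={latencies[8]:.3f}s")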

@LiuXiaoxuanPKU
Collaborator

> For example, running your throughput benchmark on Llama-7B on a single A10, we get a throughput of 112 req/min with TGI.
>
> docker run --gpus all -p 3000:80 -v /data:/data ghcr.io/huggingface/text-generation-inference:0.9.1 --model-id /data/llama-7b --num-shard 1 --max-batch-total-tokens 17664 --max-batch-prefill-tokens 2048 --max-waiting-tokens 0

Yeah, the benchmark will be very interesting and useful! One side question: how did you get --max-batch-total-tokens 17664 in TGI?

@LiuXiaoxuanPKU LiuXiaoxuanPKU self-assigned this Jul 7, 2023
@OlivierDehaene
Author

> how did you get --max-batch-total-tokens 17664 in TGI?

What do you mean?

@zhuohan123
Member

> --max-batch-total-tokens 17664

I guess the question is how did you determine this specific limit for max batch total tokens?
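
For context, one plausible way to arrive at a limit like this is from the KV-cache memory budget: whatever GPU memory remains after weights and runtime overhead, divided by the KV-cache footprint of one token. A back-of-the-envelope sketch, where every constant below is an assumption for fp16 Llama-7B on a 24 GB A10, not a value confirmed in this thread or necessarily how TGI derives it:

    # Hypothetical KV-cache budget estimate; all constants are assumptions.
    num_layers = 32        # Llama-7B decoder layers
    num_heads = 32         # KV heads (Llama-7B has no grouped-query attention)
    head_dim = 128         # 4096 hidden size / 32 heads
    dtype_bytes = 2        # fp16

    # Each token stores one key and one value vector per layer.
    kv_bytes_per_token = 2 * num_layers * num_heads * head_dim * dtype_bytes  # 512 KiB

    gpu_bytes = 24 * 1024**3            # A10: 24 GB of device memory
    weight_bytes = 7e9 * dtype_bytes    # ~14 GB of fp16 weights
    overhead_bytes = 2.5 * 1024**3      # assumed activations / allocator slack

    budget = gpu_bytes - weight_bytes - overhead_bytes
    print(int(budget // kv_bytes_per_token), "tokens")  # ~17k, same ballpark as 17664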

@hmellor
Collaborator

hmellor commented Mar 6, 2024

Closing because the README no longer contains benchmark results.

@hmellor hmellor closed this as completed Mar 6, 2024