[Bug]: Qwen1.5-72B L20x8 latest vLLM TPOT slower than v0.4.0.post, 48ms vs 39ms, why? #4852
🐛 Describe the bug

vLLM 0.4.0.post1: TPOT ~39 ms
vLLM 0.4.2 latest: TPOT ~48 ms
commit: #4794
command:
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
python3 -m vllm.entrypoints.openai.api_server \
    --model Qwen1.5-72B-Chat \
    --tensor-parallel-size 8 \
    --max-model-len 8192 \
    --trust-remote-code \
    --disable-custom-all-reduce \
    --enable-prefix-caching \
    --tokenizer-mode slow \
    --enforce-eager \
    --gpu-memory-utilization 0.9 \
    --port 8861
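For reproduction, a minimal smoke-test request against the server launched above could look like the following (this is vLLM's OpenAI-compatible completions endpoint on the port from the command; the prompt text is a placeholder, and max_tokens mirrors the 50-token outputs in the benchmark log below):

# hypothetical smoke-test request; prompt is a placeholder
curl http://localhost:8861/v1/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "Qwen1.5-72B-Chat", "prompt": "San Francisco is a", "max_tokens": 50}'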
If you can bisect to find the commit that leads to the degradation, that would be helpful. Otherwise, it is very difficult to answer a generic report of a performance regression.
vLLM seems to have been undergoing heavy refactoring recently. I'm not quite sure what's causing TPOT to be slower.
Most commits should be runnable since they pass the CI tests, and it's not related to the refactoring. Just bisect to find the commit that potentially leads to this regression.
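A sketch of such a bisect, assuming a hypothetical bench_tpot.sh that reinstalls the checked-out commit (e.g. pip install -e .), runs the benchmark, and exits non-zero when mean TPOT exceeds a chosen threshold (say 42 ms):

git clone https://github.com/vllm-project/vllm.git && cd vllm
git bisect start
git bisect bad main               # regressed: ~48 ms TPOT
git bisect good v0.4.0.post1      # known good: ~39 ms TPOT
git bisect run ./bench_tpot.sh    # git drives the benchmark at each step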
I have tested on L20 GPUs; I'm not sure the device is the same as in CI.
With CUDA graph: 35.7 ms -> 36.7 ms; without CUDA graph (eager mode): 39 ms -> 45 ms.
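For context, the two modes differ only in the --enforce-eager flag from the launch command above: passing it disables CUDA graph capture (eager mode), omitting it keeps the default CUDA graph path. A minimal A/B, with the other flags elided for brevity:

# eager mode (as in the original command)
python3 -m vllm.entrypoints.openai.api_server --model Qwen1.5-72B-Chat \
    --tensor-parallel-size 8 --port 8861 --enforce-eager
# CUDA graph mode: the same command without --enforce-eager
python3 -m vllm.entrypoints.openai.api_server --model Qwen1.5-72B-Chat \
    --tensor-parallel-size 8 --port 8861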
Maybe related to #5207.
Feel free to test whether #5207 solves your problem here.
@youkaichao Closing; it seems the latest vLLM (up to #5410) has fixed this problem (TPOT 45 ms on v0.4.2 -> 39 ms on v0.5, eager mode):
[I][2024-06-11 16:31:36][ 1/20][ 1/20 5%] session:0 turn:0 req:0 output: 50 prompt: 1024(+[1024]) first(ms): 426.7(+[ 426.7]) other(ms): 39.3 latency(s): 2.35
[I][2024-06-11 16:31:46][ 2/20][ 2/20 10%] session:1 turn:0 req:1 output: 50 prompt: 1024(+[1024]) first(ms): 427.0(+[ 427.0]) other(ms): 40.6 latency(s): 2.42
[I][2024-06-11 16:31:56][ 3/20][ 3/20 15%] session:2 turn:0 req:2 output: 50 prompt: 1024(+[1024]) first(ms): 427.8(+[ 427.8]) other(ms): 39.2 latency(s): 2.35
[I][2024-06-11 16:32:06][ 4/20][ 4/20 20%] session:3 turn:0 req:3 output: 50 prompt: 1024(+[1024]) first(ms): 428.5(+[ 428.5]) other(ms): 39.2 latency(s): 2.35
[I][2024-06-11 16:32:16][ 5/20][ 5/20 25%] session:4 turn:0 req:4 output: 50 prompt: 1024(+[1024]) first(ms): 427.6(+[ 427.6]) other(ms): 39.2 latency(s): 2.35
[I][2024-06-11 16:32:26][ 6/20][ 6/20 30%] session:5 turn:0 req:5 output: 50 prompt: 1024(+[1024]) first(ms): 429.1(+[ 429.1]) other(ms): 39.0 latency(s): 2.34
[I][2024-06-11 16:32:36][ 7/20][ 7/20 35%] session:6 turn:0 req:6 output: 50 prompt: 1024(+[1024]) first(ms): 427.0(+[ 427.0]) other(ms): 40.5 latency(s): 2.41
[I][2024-06-11 16:32:46][ 8/20][ 8/20 40%] session:7 turn:0 req:7 output: 50 prompt: 1024(+[1024]) first(ms): 427.0(+[ 427.0]) other(ms): 39.1 latency(s): 2.34
[I][2024-06-11 16:32:56][ 9/20][ 9/20 45%] session:8 turn:0 req:8 output: 50 prompt: 1024(+[1024]) first(ms): 425.3(+[ 425.3]) other(ms): 38.8 latency(s): 2.33
[I][2024-06-11 16:33:06][10/20][10/20 50%] session:9 turn:0 req:9 output: 50 prompt: 1024(+[1024]) first(ms): 426.2(+[ 426.2]) other(ms): 39.1 latency(s): 2.34
[I][2024-06-11 16:33:16][11/20][11/20 55%] session:10 turn:0 req:10 output: 50 prompt: 1024(+[1024]) first(ms): 428.2(+[ 428.2]) other(ms): 38.9 latency(s): 2.33
[I][2024-06-11 16:33:26][12/20][12/20 60%] session:11 turn:0 req:11 output: 50 prompt: 1024(+[1024]) first(ms): 426.4(+[ 426.4]) other(ms): 39.0 latency(s): 2.34
[I][2024-06-11 16:33:36][13/20][13/20 65%] session:12 turn:0 req:12 output: 50 prompt: 1024(+[1024]) first(ms): 425.6(+[ 425.6]) other(ms): 39.6 latency(s): 2.37
[I][2024-06-11 16:33:46][14/20][14/20 70%] session:13 turn:0 req:13 output: 50 prompt: 1024(+[1024]) first(ms): 425.5(+[ 425.5]) other(ms): 39.2 latency(s): 2.35
[I][2024-06-11 16:33:56][15/20][15/20 75%] session:14 turn:0 req:14 output: 50 prompt: 1024(+[1024]) first(ms): 425.2(+[ 425.2]) other(ms): 39.9 latency(s): 2.38
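The ~39 ms figure is just the mean of the other(ms) column (about 39.4 ms over the 15 requests shown). One way to compute it from a saved benchmark log, where bench.log is a placeholder filename:

grep -o 'other(ms): *[0-9.]*' bench.log \
    | awk '{sum += $2; n++} END {printf "mean TPOT: %.2f ms\n", sum/n}'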