Latency increase when run on multi-GPU #116
Hey @prd-tuong-nguyen, what kind of networking do you have between these GPUs? If they're using PCIe, the frequent communication between devices will likely cause performance to degrade. To get good performance on multiple GPUs, you typically need NVLink. My recommendation is to use a single GPU if possible, and only use multi-GPU if you have to due to memory constraints (for example, serving a 70b param model with 40GB of VRAM in fp16).
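One way to check what kind of interconnect sits between the GPUs (assuming the NVIDIA driver is installed on the host) is the topology matrix from `nvidia-smi`:

```shell
# Print the GPU-to-GPU topology matrix.
# NV# entries indicate NVLink; PIX/PHB/NODE/SYS indicate PCIe/system paths.
nvidia-smi topo -m
```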
@tgaddair Thanks for your reply.
Hey @prd-tuong-nguyen, in your case I would recommend using data parallelism rather than model parallelism. Specifically, I would run one replica per GPU, then put a load balancer in front of them using something like Kubernetes. As for the best load balancing strategy: if you do not have enough load to keep the GPUs fully utilized, or the number of adapters you're using is relatively low (<25 or so), then I would suggest a round-robin load balancing strategy. That will keep the replicas equally busy, which should help keep latency low. If, however, you're operating at very high scale, I would suggest a load balancer with a consistent hashing policy based on the adapter ID, so that you can more efficiently batch together requests for the same adapter and maximize throughput.
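The two routing strategies above can be sketched as follows. This is a minimal illustration, not the actual load balancer: the replica names are made up, and the modulo hash is a simplification of a real consistent-hash ring (which would also minimize remapping when replicas are added or removed):

```python
import hashlib
from itertools import cycle

# Hypothetical replica names, one LoRAX replica per GPU.
REPLICAS = ["replica-0", "replica-1", "replica-2", "replica-3"]

# Round robin: keeps all replicas equally busy; good when load is low
# or the number of adapters is small.
_rr = cycle(REPLICAS)

def route_round_robin(_adapter_id: str) -> str:
    return next(_rr)

# Hashing on the adapter ID: requests for the same adapter always land
# on the same replica, so they can be batched together for throughput.
def route_by_adapter(adapter_id: str) -> str:
    digest = hashlib.sha256(adapter_id.encode()).hexdigest()
    return REPLICAS[int(digest, 16) % len(REPLICAS)]
```

In practice this policy would live in the load balancer itself (e.g. a ring-hash policy keyed on a request header carrying the adapter ID), not in application code.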
@tgaddair Thank you for your suggestion, I will try it <3
Hi @prd-tuong-nguyen, do you have any performance benchmarks for running it in a multi-GPU setup, in terms of throughput?
System Info
I run your Docker image in 2 cases:
- `--sharded false`
- `--sharded true --num_shard 4`
=> When I run on a single GPU, the total time is around 1.5 seconds and it takes ~21GB of GPU memory, but when I run on multi-GPU, it takes ~2.4 seconds and 19GB per GPU :( Performance seems lower when running on multi-GPU.
Do you run into this problem?
Information
Tasks
Reproduction
Run Docker with:
`--sharded true --num_shard 4`
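For reference, a full invocation along these lines might look like the following. The image tag, port, and model ID are illustrative assumptions, not taken from the issue; only the sharding flags come from the report above:

```shell
# Illustrative only: image tag, port, and model ID are assumptions.
docker run --gpus all --shm-size 1g -p 8080:80 \
  ghcr.io/predibase/lorax:main \
  --model-id mistralai/Mistral-7B-Instruct-v0.1 \
  --sharded true --num_shard 4
```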
Expected behavior
Same or better performance when running on multi-GPU.