Use CPU instances with GPU inference accelerator #618
Comments
+1. This is critical for a cost-effective deployment.
Hi, I'd like to look into this issue if anyone can help me get started.
@lezwon thanks for your interest! I think the first step is to figure out how to create an EKS cluster with instances that have Elastic Inference attached. Currently, Cortex uses eksctl to create the cluster, and based on eksctl-io/eksctl#643, it looks like eksctl might not support Elastic Inference yet. But I am not sure if that's the case, or if there is a workaround; it could be worth reaching out to the eksctl team to inquire. @RobertLucian or @vishalbollu, do you have any additional context on this?
@deliahu Thank you for the help. I'll look into the issue you mentioned with eksctl. :)
@lezwon sounds good, thank you, keep us posted!
This issue has been deprioritized and the relevant eksctl issue was closed due to inactivity, but using Elastic Inference would be cost-saving for most Cortex users. Is there any plan to address this in upcoming releases?
@H4dr1en we recently added multi-instance-type clusters as a feature. This can already mitigate costs by allowing CPU, GPU, and spot instances to run in the same cluster. I know it is not remotely the same as Elastic Inference, but it is an improvement :) We will look into Elastic Inference again soon, since we are re-focusing the team's efforts on improving the Cortex UX on AWS.
Description
Instead of spinning up a GPU nodegroup, spin up a CPU nodegroup with Elastic Inference (GPU accelerated inference).
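For context on what an integration would need to do under the hood: Elastic Inference accelerators are attached to a CPU instance at launch time through the EC2 API (`run_instances` accepts an `ElasticInferenceAccelerators` parameter, which is also what boto3 exposes). A minimal sketch of the launch parameters involved; the AMI ID, instance type, and accelerator size below are placeholders, not values Cortex actually uses:

```python
# Sketch of the EC2 launch parameters needed to attach an Elastic Inference
# accelerator to a CPU instance -- roughly what an eksctl/Cortex integration
# would have to emit when provisioning the nodegroup.
launch_params = {
    "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
    "InstanceType": "c5.large",          # CPU instance type
    "MinCount": 1,
    "MaxCount": 1,
    "ElasticInferenceAccelerators": [
        # eia2.medium is one of the available accelerator sizes
        # (eia2.medium / eia2.large / eia2.xlarge)
        {"Type": "eia2.medium", "Count": 1}
    ],
}

# With AWS credentials configured, these parameters would be passed as
# boto3.client("ec2").run_instances(**launch_params).
print(launch_params["ElasticInferenceAccelerators"][0]["Type"])
```

The accelerator is billed separately from the instance, which is the source of the cost savings: a cheap CPU instance plus a right-sized accelerator instead of a full GPU instance.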
Additional Context