OpenStack? #2742
Replies: 2 comments
-
Thank you for your response! We currently don't support OpenStack, so your contribution would be greatly appreciated. I'm not very familiar with OpenStack; could you elaborate on how you envision it fitting into our existing stack? Would it aid in the provisioning phase, or serve as another cloud provider, similar to what we implemented for k8s?
-
Thanks for getting back to me. I'm thinking that some research institutes use OpenStack and will either have NVIDIA GPU "slicing" licences or direct PCI passthrough, and will in future want to deploy models locally. Using SkyPilot to deploy to an OpenStack cloud might be something people would be interested in; standard practice at the moment is usually deployment via a GUI or via IaC such as Terraform.

OpenStack already has a Python-based SDK that users can use to query the currently active servers and their information, and also to query things like machine flavours, available images, etc. The one thing that might be limiting is that the engineers operating OpenStack won't be able to select GPUs the way the cloud example suggests. Their GPUs will be at a data centre somewhere, so rather than picking a specific GPU they'll be restricted to whatever hardware their flavours expose, unless an algorithm can drill into the flavours and make a decision based on the amount of VRAM. The same goes for the costing telemetry: I'm not sure how your current algorithm handles that, but it might be difficult to find a cost metric for a cloud that's only used internally.

In my head the workflow goes like:

SkyPilot -> OpenStack Auth -> OpenStack API -> Provision Machines

Not sure if this is useful!
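The "drill into the flavours and decide by VRAM" idea could be sketched in Python. Note this is a hypothetical illustration: the `gpu:vram_gb` extra-spec key and the flavor names below are invented, since real deployments expose GPU information through deployment-specific extra specs (often PCI passthrough aliases or vGPU resource keys). In a real cloud the flavor list would come from `openstacksdk` rather than a hard-coded list.

```python
# Hypothetical sketch of VRAM-based flavor selection.
# In a real deployment, flavors would be fetched via openstacksdk, e.g.:
#   import openstack
#   conn = openstack.connect(cloud="mycloud")   # reads clouds.yaml
#   flavors = [f.to_dict() for f in conn.compute.flavors()]
# The "gpu:vram_gb" extra-spec key below is a made-up convention for
# illustration; actual keys vary per deployment.

def pick_flavor(flavors, min_vram_gb):
    """Return the smallest flavor (fewest vCPUs) whose extra_specs
    advertise at least `min_vram_gb` of GPU VRAM, or None if no
    flavor qualifies."""
    candidates = [
        f for f in flavors
        if float(f.get("extra_specs", {}).get("gpu:vram_gb", 0)) >= min_vram_gb
    ]
    return min(candidates, key=lambda f: f["vcpus"], default=None)

# Fabricated flavor records shaped like openstacksdk flavor dicts:
flavors = [
    {"name": "m1.large", "vcpus": 8, "ram": 32768, "extra_specs": {}},
    {"name": "g1.a100", "vcpus": 12, "ram": 65536,
     "extra_specs": {"gpu:vram_gb": "40"}},
    {"name": "g1.v100", "vcpus": 8, "ram": 32768,
     "extra_specs": {"gpu:vram_gb": "16"}},
]

print(pick_flavor(flavors, 24)["name"])  # g1.a100
```

Something along these lines might let SkyPilot map a GPU/VRAM request onto whatever flavours an internal cloud happens to offer, instead of asking the user to pick hardware directly.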
-
Hi, I'm just wondering if OpenStack is a supported provider for locally deployed LLMs?
If not, it's something I'd like to make a start on.