Research: show GPU attached to a function #639
Comments
Per these docs https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/ there are two core changes.

The simplest example of a Pod using a GPU is:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-vector-add
      # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          nvidia.com/gpu: 1 # requesting 1 GPU
```

I think most of these changes will be made in the request struct and in the stack file schema: https://github.com/openfaas/faas/blob/master/gateway/requests/requests.go#L47 and https://github.com/openfaas/faas-cli/blob/master/stack/schema.go#L50. To support the mixed GPU case, we need to allow the developer to specify a GPU requirement per function.
This would mean adding a new option to the HTTP requests and the stack schema.
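For illustration, a stack file extension along these lines might look like the sketch below. The `gpu` key and its mapping to `nvidia.com/gpu` are assumptions for discussion, not part of the current schema; the function name and image are examples.

```yaml
# Hypothetical stack.yml — the "gpu" limit is NOT part of the current schema
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  inference:
    lang: python3
    handler: ./inference
    image: example/inference:latest
    limits:
      memory: 512Mi
      gpu: "1"   # hypothetical: faas-netes would translate this to nvidia.com/gpu
```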
We already cover the node selector via stack constraints. P.S. I think this issue should be moved to faas-netes since it's Kubernetes-specific.
I would like project research and initiatives to start out here in the FaaS repo for visibility. Thanks for the comments, Lucas.
That project looks like a useful example.
@dkozlov we had some discussion about this on Slack. Please can you summarize the points for the community?
Sorry for the late response.
Yes, it "just works" after installing nvidia-docker.
Yes, I can.
It depends on how you utilize your GPU, but in most cases neural networks on a GPU are demonstrably faster than on a CPU.
I'm confused by this comment. I thought we were talking about scheduling constraints on Slack because two Pods cannot use the same GPU at the same time?
I have found the following problems with the native "Schedule GPUs" support:
As a workaround I have implemented the following:
If you install only the NVIDIA drivers, Docker, and nvidia-docker, that is enough to start GPU Docker containers via Kubernetes without any device plugin. I have also found two outdated guides for OpenShift which do not support overcommitting of GPUs. Some useful information from ClarifAI:
My question was: "could you run two different functions using [the same] GPU at the same time?" (expecting an answer of no), and you answered "Yes, I can". Are we talking about the same thing? I thought GPUs could only be used by a single container/Pod at a time?
I can repeat it again: "Yes, it is possible" :). It was even possible in 2016; see the ClarifAI blog post.
@alexellis According to the issue kubernetes/kubernetes#52757, @flx42 means that it is possible to share an NVIDIA device between multiple containers, but only concurrently, as per the original NVIDIA design: https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf
So by default, multiple processes can use the GPU simultaneously even without using the "Multi-Process Service".
You are both correct :)
This is correct in the scope of Kubernetes, GPU resources are integer values and will belong to a single container. Unless you try to hack around it, that is :)
This is also correct. If you launch containers manually on your machine, you can launch 10 containers accessing the same GPU, no problem. You can also launch 10 processes outside containers; it's no different. Let's not even talk about the Multi-Process Service (MPS) for now; you probably want to start with just the upstream GPU support in K8s.
@flx42 Which other tricks/hacks could we use for overcommitting GPUs (using a single GPU from multiple Pods) within Kubernetes? I am asking because OpenFaaS scales by Pods; it cannot scale by containers within a single Pod.
I don't think you should try to hack around the official upstream support: that means don't overcommit GPUs. If you need to run multiple Pods for the same function, you will need multiple GPUs.
FYI: https://github.com/Microsoft/KubeGPU — it seems Microsoft is trying to solve this problem.
@flx42 thanks for your input 👍 I would like to figure out what we need to do in the project to make it easy to consume a GPU in a function on GKE, or on a bare-metal node / VM with nvidia-docker swapped in. If you'd like to collaborate on this, we are also talking on Slack.
I think you should embrace the current upstream support, including its limitations. For the sake of simplicity, and to avoid falling into suboptimal scheduling corner cases, I think you should limit the initial implementation to 1 GPU per container.
Would either of you be interested in helping to implement that within the project?
@flx42 GPUs in the cloud are very heavyweight and expensive. What could I buy to use at home for testing this work and ensuring the GPU support is stable? Do you or @dkozlov have a good container or some sample code that can verify that it has used or is using a GPU? That would be ideal for our testing, proving that things are working end to end.
I'm working on a patch that will enable scheduling functions in k8s if there is an extended resource exposed by, let's say, a suitable device plugin such as [this]. The work includes changes to faas-netes and faas-cli, and a minor one to the FunctionResources struct in faas. Naturally the faas-cli and faas patches will not be k8s-specific.
@alexellis https://hub.docker.com/r/tensorflow/tensorflow/
Open https://developer.nvidia.com/cuda-gpus -> CUDA-Enabled GeForce Products -> select any GPU with Compute Capability >= 6.1.
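As a concrete end-to-end check (a sketch, assuming a host with the NVIDIA drivers and the NVIDIA container runtime installed, and Docker 19.03+ for the `--gpus` flag; image tags are examples):

```shell
# Verify the driver/runtime wiring by running nvidia-smi inside a container:
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# Verify a framework can actually see the device:
docker run --rm --gpus all tensorflow/tensorflow:latest-gpu \
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```

If the second command prints an empty list, the container started but the GPU was not exposed to it.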
Derek add label: Hacktoberfest |
Will this work on hosts with more than one GPU? I have a computer with two GTX 1080 Tis that I use for training or bulk inference. NVIDIA allows you to pin a Docker container to a single GPU via an environment variable.
@sberryman yes, our device plugin implementation supports multiple GPUs on one node and sets this environment variable accordingly for the container.
1. GPU device plugin support proposal with k8s
Bump |
Bringing this to @DieterReuter attention, with the Jetson Nano as a target device for experimentation. |
@johnmccabe rebuilt his kernel to use the GPU in Docker.
Hello, is this still an active topic? Thanks for the answers.
@aimbot31 I think the best way to achieve this is through the new Profiles feature in faas-netes: https://docs.openfaas.com/reference/profiles/#use-tolerations-and-affinity-to-separate-workloads Using profiles exposes the ability to give node affinity to your functions so that they run on nodes with a GPU available.
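For reference, a Profile of the kind described might look like the sketch below, based on the Profiles documentation linked above. The profile name, toleration key, and the GKE accelerator label are examples/assumptions for a GKE-style setup:

```yaml
kind: Profile
apiVersion: openfaas.com/v1
metadata:
  name: gpu-nodes          # example name
  namespace: openfaas
spec:
  tolerations:
    - key: "nvidia.com/gpu"
      operator: "Exists"
      effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cloud.google.com/gke-accelerator
                operator: Exists
```

A function opts in via an annotation in its stack.yml (`com.openfaas.profile: gpu-nodes`). Note that this only steers scheduling onto GPU nodes; it does not add a `nvidia.com/gpu` resource limit to the Pod.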
@LucasRoesler I am not able to get the Profiles method to work on GKE. It allows me to run my Pod on a GPU node fine, but apparently GKE doesn't attach and make the GPU available to a Pod unless it is explicitly requested via limits. Since it's a managed Kubernetes, I cannot configure the default runtime to nvidia-docker on these nodes. Any suggestions?
@guptaprakash9 unfortunately, I don't have a good answer; I can't think of a good way to set that request in OpenFaaS right now. @alexellis we either need to expand the spec for requests or add a custom Profile field to configure this. The relevant GKE docs: https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#pods_gpus
@guptaprakash9 if you or anyone landing here still has interest, we would be open to having this feature (a) sponsored, (b) contributed, or (c) written up in detail as a proposal.
Hello everyone, I would like to implement a component for OpenFaaS which will help to accelerate functions with GPUs and TPUs. This would benefit scientific applications, video frame analysis tasks, etc. Any ideas/suggestions on what kind of component would best fit this project?
Found something interesting @alexellis. |
Thanks for the link @rajitha1998. You could try starting with faasd, which is fairly small and hackable; you can configure containerd to use a local GPU or TPU that you have. It's on my list, and it's not for lack of interest, but since we are way off the funding target on GitHub Sponsors and I have no access to such equipment, it's up to the community to invest their own time and resources into this.
I am currently a bit busy with my university work, but I will add anything here which helps and will look into this more when I get time :) @alexellis. There is a way to create a local Kubernetes cluster with a GPU using Minikube, though a Linux computer is required; using the 'none' driver seems to be the easiest way: https://minikube.sigs.k8s.io/docs/tutorials/nvidia_gpu/ Having a low-end GPU such as an Nvidia 940MX is enough: https://developer.nvidia.com/cuda-gpus
Description
Show a GPU attached to an OpenFaaS function in Kubernetes
Background
We have several users using Python for data-science where GPU acceleration is available. From the investigation I've done so far we should be able to make a few minor changes to faas-netes and then be able to mount a GPU into a function.
Tasks
Other notes
GKE has GPUs available pre-configured under Kubernetes - I think this would be the easiest way to test - https://thenewstack.io/getting-started-with-gpus-in-google-kubernetes-engine/
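One quick way to confirm that a cluster actually exposes GPUs to the scheduler (a sketch; assumes `kubectl` access to a cluster where the NVIDIA device plugin is installed, and `<node-name>` is a placeholder):

```shell
# Show which nodes advertise the nvidia.com/gpu extended resource
# (the dot in the resource name must be escaped in custom-columns):
kubectl get nodes -o custom-columns='NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'

# Inspect a single node's capacity/allocatable section in detail:
kubectl describe node <node-name> | grep -A 8 -i allocatable
```

Nodes showing `<none>` in the GPU column will never satisfy an `nvidia.com/gpu` limit, which is a useful first check before debugging the function side.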
Otherwise you'll need an Nvidia GPU, and the process for configuring your kubelet is not trivial.
Documentation page from Kubernetes: https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/