Using NVIDIA GPU #1695
Hi @HyeyeonKoo, thanks for your interest in OpenFaaS. You will need to fill out the whole issue template and ideally provide us with a code example too: https://raw.githubusercontent.com/openfaas/faas/master/.github/ISSUE_TEMPLATE.md Let me know if you are going to do that or whether you want to close the issue. If this is for your employer, feel free to reach out about commercial support. Alex
Thank you for the answer. I have filled out the form again.

My actions before raising this issue
I found an issue similar to the one I was looking for: #639

Expected Behaviour
I want to use the GPU in an OpenFaaS function with PyTorch.

Current Behaviour
The GPU is not available when using PyTorch in an OpenFaaS function.

Are you a GitHub Sponsor (Yes/No?)
Check at: https://github.com/sponsors/openfaas

List All Possible Solutions and Workarounds
Is there a chance to make the GPU available by changing the function's Dockerfile or YAML?

Which Solution Do You Recommend?

Steps to Reproduce (for bugs)

Context
I am trying to serve deep learning model inference, such as sentiment analysis.

Your Environment
@HyeyeonKoo at least one approach to this is partially described here: https://docs.openfaas.com/reference/profiles/#use-tolerations-and-affinity-to-separate-workloads Using the Profiles feature you can ensure that the function runs on a node with a GPU. It might be possible with constraints, but I think the Profiles approach is more accurate.
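For illustration, a Profile is normally attached to a function through the com.openfaas.profile annotation in the function's stack file. A minimal sketch, assuming a profile named withgpu and a function called sentiment (both names are placeholders, not taken from this issue):

```yaml
# stack.yml (sketch): attach the "withgpu" Profile to a function via annotation
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  sentiment:                          # placeholder function name
    lang: python3
    handler: ./sentiment
    image: example/sentiment:latest   # placeholder image
    annotations:
      # References the Profile by name; the Profile object lives in the openfaas namespace
      com.openfaas.profile: withgpu
```

The same annotation can also be passed at deploy time, e.g. faas-cli deploy --annotation com.openfaas.profile=withgpu.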
Thank you for answering. First, when I created a profile with YAML, an error occurred.
So, I applied it with
Can you give me some more advice on this problem, please?
It looks like a validation error; without seeing the profile YAML you used, it is pretty hard to say for sure. My guess from rereading the docs is that it is a YAML indentation error. The docs have this (which is wrong):

```yaml
kind: Profile
apiVersion: openfaas.com/v1
metadata:
  name: withgpu
  namespace: openfaas
spec:
  tolerations:
  - key: "gpu"
    operator: "Exists"
    effect: "NoSchedule"
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: gpu
              operator: In
              values:
              - installed
```

But it should probably be this:

```yaml
kind: Profile
apiVersion: openfaas.com/v1
metadata:
  name: withgpu
  namespace: openfaas
spec:
  tolerations:
  - key: "gpu"
    operator: "Exists"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu
            operator: In
            values:
            - installed
```

It would really help to know the output from
Thank you for the advice. First of all, here is the
And I installed OpenFaaS with this guide, which uses Helm. I checked this again.
On Minikube, it also works well.
I created the profile with your advice, so the validation problem is gone. Then I tried to build, push, and deploy the function again. I keep trying to find a way. If you have any ideas, please tell me.
If I am reading this correctly, then it seems like we need to set a resource request on the function, as in the sketch below: https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/ But this isn't exposed via the OpenFaaS API right now. We might need to sync with @alexellis on this, but he is on vacation until next week. I see two options
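For context, plain Kubernetes schedules GPUs by putting an nvidia.com/gpu resource limit on the container, as described on the page linked above. A minimal sketch of what would have to appear on the function's Pod spec (the pod name and image are placeholders; OpenFaaS does not generate this for functions today):

```yaml
# Sketch: a plain Kubernetes Pod requesting one NVIDIA GPU
# (requires the NVIDIA device plugin to be running on the node)
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test                   # placeholder name
spec:
  containers:
  - name: cuda-container
    image: nvidia/cuda:11.0-base   # example image
    resources:
      limits:
        nvidia.com/gpu: 1          # request a single GPU
```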
Thank you very much.
Hello, thank you for the nice work.
I am trying to use OpenFaaS with a GPU, but it is not easy.
So, I would really appreciate it if you could give me some advice.
Environments
- faas-cli version: 0.14.1
- docker version (e.g. Docker 17.0.05): 20.10.12
I installed nvidia-docker by following this link,
and I made Minikube recognize the GPU in the cluster by following this link, so the cluster recognizes it well.
It also works well in an nvidia/cuda container.
But it didn't work in an OpenFaaS function. How can I make an OpenFaaS function recognize the GPU? Do I need to edit the Dockerfile?
Please help.