Support for heterogeneous concurrency limits #7834
Comments
I just noticed that you added https://docs.prefect.io/concepts/tasks/?h=conc#task-run-concurrency-limits. If we could specify task concurrency limits per agent as well, the issue would be resolved. It might also align quite nicely with how queue concurrency can be limited per queue or per agent.
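For reference, a minimal sketch of how the tag-based task concurrency limits linked above are used today, assuming the limit is registered first (the CLI syntax is quoted from memory, so double-check it against your version):

```python
# Sketch only: a task tagged "gpu" after registering a matching limit, e.g.
#   prefect concurrency-limit create gpu 1
# The tag name and limit value here are illustrative, not from the issue.
from prefect import flow, task

@task(tags=["gpu"])
def train_model():
    ...

@flow
def training_flow():
    # Runs of train_model across all flows share the "gpu" limit of 1.
    train_model.submit()
```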
@madkinsz just tagging you so this doesn't get lost.
This should be addressed fully by the "Work pool" concept that is currently experimental.
@madkinsz so Work pools just dropped. However, I cannot see how I am supposed to use work pools to achieve the desired result. The only option I see is to keep doing what we are currently doing: each node runs two agents.
This has the big shortcoming that the whole flow running on the GPU machine has to run with a concurrency limit of 1, and that on a machine that might have as many as 240 CPU threads and 2 TB of RAM. The training itself uses the cores, but that the rest of the flow cannot is weird to say the least.
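To make the current workaround concrete, here is a rough sketch of the two-agents-per-node setup described above using the Prefect 2.x deployment API; the queue names and exact CLI invocations are my assumptions and may need adjusting:

```python
# Assumed setup on the orchestration side (commands from memory, verify first):
#   prefect work-queue create cpu
#   prefect work-queue create gpu
#   prefect work-queue set-concurrency-limit gpu 1
#   prefect agent start -q cpu   # agent 1 on the node
#   prefect agent start -q gpu   # agent 2 on the same node
from prefect import flow
from prefect.deployments import Deployment

@flow
def preprocessing_flow():
    ...

@flow
def training_flow():
    ...

# Route the GPU-bound flow to the limited queue, everything else to the CPU queue.
Deployment.build_from_flow(flow=training_flow, name="train", work_queue_name="gpu").apply()
Deployment.build_from_flow(flow=preprocessing_flow, name="preprocess", work_queue_name="cpu").apply()
```

The shortcoming pointed out above still applies: everything inside training_flow, not just the GPU step, is capped at one concurrent run.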
First check
Prefect Version
2.x
Describe the current behavior
Currently I can only set a flat concurrency limit per agent or queue.
Describe the proposed behavior
I want to specify that a machine can support one flow that requires a GPU, but many flows that do not require a GPU.
Example Use
The typical use case for this is a GPU node.
Let's take the following example:
Preprocessing -> Training -> Postprocessing
Preprocessing and Postprocessing would launch many concurrent flows (on the same machine), while Training might only have one flow that runs on the GPU. It must be ensured that no more than one GPU flow runs at the same time, otherwise there will be resource conflicts.
So the CPU-only concurrency limit would be, say, 10, while the limit for GPU-enabled flows would be 1.
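One way to approximate this with existing primitives might be to put the limits on task tags rather than on the whole flow, so only the GPU step is serialized. A sketch under that assumption follows; the tag names "cpu" and "gpu" and the registration commands are hypothetical and from memory:

```python
# Assumes the limits were created beforehand, e.g.:
#   prefect concurrency-limit create cpu 10
#   prefect concurrency-limit create gpu 1
from prefect import flow, task

@task(tags=["cpu"])
def preprocess(item):
    ...

@task(tags=["gpu"])
def train(batches):
    ...

@task(tags=["cpu"])
def postprocess(model):
    ...

@flow
def pipeline(items):
    batches = [preprocess.submit(i) for i in items]  # up to 10 run concurrently
    model = train.submit(batches)                    # at most 1 runs at a time
    return postprocess.submit(model)
```

This still does not express a per-machine limit, which is what this issue is asking for, but it avoids capping the CPU-only work at 1.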
Additional context
No response