
Feature Request: Stagger Options: Max Concurrent Threads & Maximum Memory #100

Open
MinesofMoriaCrypto opened this issue Jun 7, 2021 · 2 comments

Comments

@MinesofMoriaCrypto

I would like to be able to set a maximum for concurrent threads. I understand the Maximum Phase 1 setting sort of does this, since I could set X threads on the jobs and then account for the threads not used, but it would be much easier if I could just set a global thread max, load up a bunch of jobs, and trust it wasn't going to soak my thread count.

I would also like to be able to set a maximum amount of memory allocated to all jobs concurrently. Same deal: rather than trying to land it just right, I'd rather have HP know not to start another job if the maximum amount of memory is already allocated to the current jobs.
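To make the request concrete, here is a minimal sketch of the gating check I have in mind. All the names (`Job`, `MAX_TOTAL_THREADS`, `MAX_TOTAL_MEMORY_MIB`, `can_start`) are hypothetical, not HP's actual code:

```python
# Hypothetical sketch of the requested global caps; not HP's actual API.
from dataclasses import dataclass

MAX_TOTAL_THREADS = 24        # global thread cap the user would configure
MAX_TOTAL_MEMORY_MIB = 64000  # global memory cap the user would configure

@dataclass
class Job:
    threads: int
    memory_mib: int

def can_start(new_job: Job, running: list[Job]) -> bool:
    """Return True only if starting new_job keeps both global caps intact."""
    used_threads = sum(j.threads for j in running)
    used_memory = sum(j.memory_mib for j in running)
    return (used_threads + new_job.threads <= MAX_TOTAL_THREADS
            and used_memory + new_job.memory_mib <= MAX_TOTAL_MEMORY_MIB)

# The stagger loop would then hold the next job until this check passes,
# rather than starting it purely on the timer.
```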

Thanks for all your hard work.

@josemanuelhda

AFAIK the plotting process uses the assigned memory all the time, except during copying and the first spread of the plot, which last only a few minutes; the rest of the time it builds tables in RAM, does its calculations there, and saves them to disk. So far there is no use in making HP decide for you: calculate the total amount of memory you want to assign to plotting, divide it by the maximum number of concurrent processes you can run, and assign that memory to every job.
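For example, a quick split along those lines (64 GiB total and 8 jobs are just illustrative numbers):

```python
# Illustrative only: divide the plotting memory budget evenly across jobs.
total_memory_mib = 64 * 1024   # 65536 MiB set aside for plotting
max_concurrent = 8             # maximum concurrent processes
per_job = total_memory_mib // max_concurrent
print(per_job)                 # 8192 MiB assigned to every job
```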

Nevertheless, you'd be surprised how little difference, if any, assigning more than the recommended maximum amount of RAM makes, and on the other side, assigning low memory does not penalize you that much. On my system a plot with 1300 MiB of RAM takes only 25% more time to finish on the same drive with the same number of threads as one with 3200 MiB, so if you have many drives and are short on memory, plotting with 1 or 2 threads and 1 to 1.5 GiB of RAM is a very productive option.

@MinesofMoriaCrypto

There is a use for HP deciding to limit these. Consider when I run 10 jobs that typically finish in 5 hours 45 minutes, with 4 threads max and 8000 memory per job, a phase 1 limit of 8, and a stagger of 20 minutes. The idea is to overlap the jobs so the timing lands with 16 threads finishing phase 1 and 12 left available: 16 threads stay occupied by the original phase 1 jobs plus 3 new phase 1 jobs, then 7 of the original jobs overlap down to 1 core each, then 2 more jobs enter phase 1 with 4 cores each. By stabilizing a similar cycle, one could get 10 jobs running under a high memory load. The inherent maximum for this particular system is 8 jobs with 64 GB of RAM to use. This means that on occasion I would prefer the 9th job in the load order to be staggered slightly longer than the usual estimate, to avoid ending up in the page file.

This is just one scenario, but there are numerous complicated scenarios where both a maximum thread limit and a maximum memory commitment would be useful. I'll be loading a new system running 24 threads and 64 GB of RAM. There are certainly points of negative return; however, at the moment I've found what I consider a middle ground, using 4 threads and 8000 memory to cut Phase 1 time drastically. With the 8 extra threads, I either need to cut the memory down to 3800, or maybe even drop to 2 cores per job. Either load could get towards the max available RAM: with a 2-core set of offset jobs I would probably need to queue around 20 jobs, and if they were all running at once with a 3800 max, that would be 10 GB into the page file.
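Checking that last figure (all values taken from the numbers above, treating "3800" as MiB):

```python
# Back-of-the-envelope check of the page-file estimate in the scenario above.
jobs = 20
per_job_mib = 3800
ram_gib = 64
total_mib = jobs * per_job_mib             # 76000 MiB committed in total
overflow_gib = total_mib / 1024 - ram_gib  # memory past physical RAM
print(round(overflow_gib, 1))              # 10.2 -> roughly the "10 GB into
                                           # the page file" mentioned above
```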

In either case, both global limits would be helpful.
