for _constraints: memory requirements vary with available # of cores #1953
Inability to specify memory per core is causing build failures with g++, especially in packages that make heavy use of templates. Simply adding higher memory requirements tends to drive these workloads onto machines with more cores, which doesn't solve the problem.
Duplicate of #4433
As said in the other report, I don't want to make the constraint system overly complicated, since it is already a bottleneck, and the next level of problem would be "I just have one compile job which needs 6 GB for this generated gsoap file, the others just need 500 MB, and I need to configure that...". The only sane way is to decide this per source in the spec file. OBS cannot know anything about the need...
On Wed, 7 Mar 2018, Adrian Schröter wrote:
> as said in the other report, I don't want to make the constraint system overly complicated, since it is already a bottleneck and the next level of problem would be "I just have one compile job which needs 6 GB for this generated gsoap file, the others just need 500 MB and I need to configure that...".
> The only sane way is to decide this per source in the spec file. OBS cannot know anything about the need...

Which is why we're trying to tell it ...
Why not remove _constraints completely?
I would like to refresh the discussion about this.
So a machine with 16 threads ends up using only 6 of them. That's very bad.
You can't really do anything from the _constraints side, since I don't believe OBS sorts the worker list, it just filters it anyway. So you can just request more memory if you want more cores active.
Just for the sake of completeness here, output from
I would like to give this issue a fresh look and a deeper analysis. A few weeks ago I had a brief discussion with @adrianschroeter, who believes that most package builds are dominated by a single process that consumes most of the memory resources. So first we see a parallel compilation, followed by comments about memory usage:
So for such a package, I would like to express constraints in the following manner:
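For illustration, a rough sketch of how such constraints might be written, assuming the memoryperjob element mentioned later in this thread lives under hardware next to memory (all values are made up):

```xml
<!-- Illustrative sketch only: ask for at least 4 GB of total memory
     (for the large single-threaded link step) and at least 800 MB
     per parallel job (for the compile phase). -->
<constraints>
  <hardware>
    <memory>
      <size unit="M">4096</size>
    </memory>
    <memoryperjob>
      <size unit="M">800</size>
    </memoryperjob>
  </hardware>
</constraints>
```

The total limit covers the single big linker process, while the per-job limit keeps the scheduler from dispatching the build to a worker whose many cores would each get too little RAM during the parallel compile phase.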
That should explain why both of these are important. Finally, I'm attaching a graph for Firefox where I don't separate individual processes. I hope it can convince the OBS folks that memory per core is something really useful from a packaging perspective.
Hm, so you have a little bit more memory usage during the build, but still the single large one linking at the end. So, when we add a per-core memory limit in the constraints, it would need to fulfill both the per-core and the total given memory requirement, right?
Exactly! Both are useful.
Just for you: #10008
Great! Thank you for it!
There is one downside with this approach. While previously packages could build on any worker with the minimum amount of RAM available by just running fewer jobs in parallel, they lose the ability to scale down to such smaller workers and instead clog up the queue for the bigger ones. This is especially annoying for ARM, where there aren't many powerful hosts.
Examples of packages that fail when using memoryperjob, but built fine before with the %limit_build macro in the spec, for ppc64:
Seems like OBS gets the SMP calculation wrong for PPC64(LE). PPC64 is the only architecture where both cores and threads are specified as QEMU -smp parameters; all other archs just use a single core count value.
The _constraints system is quite powerful, but every so often I lack a possibility to request memory per core.
The rationale is rather simple: a package that builds in parallel will use more memory the more CPU cores it receives.
Practical example: openSUSE:Factory/libtorrent-rasterbar
- it requires ~3.5 GB when building with make -j4
- it requires ~1.5 GB when building with make -j2
- it requires ~800 MB when building with make -j1
The .spec file builds using make %{?_smp_mflags}, so the number of parallel jobs is injected by the worker / build script.
In this case it would be great to have a way to specify: I need 800 MB / core.
The build script could then dynamically lower the number of parallel build jobs based on the available RAM.
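A spec-side workaround in this spirit is the %limit_build macro mentioned above in this thread; a rough sketch of the usual pattern, assuming the openSUSE memory-constraints package and that the -m argument gives the expected memory per build job in MB:

```
# Rough sketch, not a definitive recipe: the memory-constraints package
# provides the %%limit_build macro, which caps the parallel job count so
# that each job gets roughly the amount of RAM given via -m (in MB).
BuildRequires:  memory-constraints

%build
# Assume roughly 800 MB of RAM per compile job.
%limit_build -m 800
make %{?_smp_mflags}
```

Unlike a memoryperjob constraint, this only scales the job count down once the build has already been dispatched; it cannot steer the job towards a worker with a better memory-to-core ratio in the first place.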