Create a jdk21u s390x Linux DevKit toolchain #3700
As part of this I'm trying to replicate the existing aarch64 centos7 devkit from @andrew-m-leonard's work in adoptium/ci-jenkins-pipelines#955, as well as trying on Fedora, so I'll log the findings here.
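For reference, a minimal sketch of how such a devkit build can be driven with the upstream jdk21u devkit framework; the `TARGETS`/`BASE_OS`/`BASE_OS_VERSION` variables come from OpenJDK's `make/devkit` makefiles, and the specific values here are assumptions for this experiment rather than a confirmed recipe:

```sh
# Sketch only: drive the upstream devkit build for s390x
# (Fedora 19 chosen to approximate the RHEL7-era toolchain; an assumption)
cd jdk21u/make/devkit
make TARGETS=s390x-linux-gnu BASE_OS=Fedora BASE_OS_VERSION=19
```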
[1] Uses gcc 4.8.5.

NOTE 1: It should be possible to download the prerequisite packages on RHEL7 using …
NOTE 2: Fedora 21 was the first release with aarch64 repositories available, so you cannot build an earlier devkit there. I have successfully built one on aarch64 based on Fedora 21, but that is of no use to us. s390x was available earlier, so it should be possible to build a RHEL7-compatible devkit there.
NOTE 3: Between F27 and F28 the aarch64 port was moved out of the …
NOTE 4: Packages that may or may not be required but were in my RHEL7 test system: …
NOTE 5: Attempting to build gcc 11.3 on Fedora 39 (outside the devkit) fails with …
NOTE 6: The platform detection on Fedora doesn't always seem to work and needed …
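Since the exact command in NOTE 1 is truncated above, here is one plausible way to fetch prerequisite packages on a subscribed RHEL7 host, using `yumdownloader` from yum-utils; the package list is purely illustrative:

```sh
# Hypothetical package list - substitute whatever the devkit build requires
yum install -y yum-utils
yumdownloader --resolve --destdir=/tmp/devkit-rpms \
    glibc glibc-devel glibc-headers kernel-headers
```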
The important ones for the purposes of this issue are the …
Summary of s390x devkits (for reference, on the dockerhost machine it takes about 45 minutes to build a RH7 devkit, and around 10 minutes to build the JDK afterwards):
excellent @sxa |
@sxa I'm having issues with the F19 devkit - it seems some of the executables are picking up GLIBC_2.33?
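A quick way to confirm which glibc symbol versions the devkit executables actually require (a diagnostic sketch; the devkit path is an assumption):

```sh
# Print the highest versioned GLIBC symbol each binary references
for f in devkit/bin/*; do
  max=$(objdump -T "$f" 2>/dev/null | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -1)
  echo "$f: ${max:-static/none}"
done
```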
Yeah, it hasn't worked as expected - there are still some dependencies coming from the host system for that one, unfortunately.
It looks like the one I gave you was probably the one built with GCC11 on a Fedora 34 host when I republished without SDT, so it had some extra dependencies that it shouldn't have had. I am now rebuilding with the RH7 system gcc (4.8.5), which should alleviate those errors with the binutils packages. @andrew-m-leonard there is a new version of … in the same directory as the previous one as …
A bit more experimentation - the reason for the …

Other things being tried (note that all devkits referenced here are built without systemtap-sdt):
[1] Requires glibc 2.27, so will not run on a RHEL7 system.

Unfortunately, presumably due to things like glibc being at different patch levels (and it looks like Fedora 19 came with gcc 4.8.1 instead of the 4.8.5 in later CentOS7 releases; for reference, CentOS 7.0.1406 had gcc 4.8.2), the builds using a Fedora19 devkit are not binary identical to those built using a RHEL7 devkit. The only practical option is to build and test with a devkit on the same type of host it was built on, which should be RHEL7 in the absence of a Fedora 19 system. The devkit can be from RHEL7 or Fedora 19. And with that, I'm done :-)
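To check binary-identical claims like the above, a comparison along these lines works (the tarball names are placeholders):

```sh
# Unpack both JDK tarballs and compare byte-for-byte
mkdir rhel7 f19
tar -xf jdk-built-with-rhel7-devkit.tar.gz -C rhel7
tar -xf jdk-built-with-f19-devkit.tar.gz   -C f19
diff -r rhel7 f19 && echo "binary identical" || echo "differs"
```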
Additional tests building openjdk 21 with the built devkits:
[1] Identical here means that there are no binary differences in anything within the final JDK tarball when comparing with the initial build from the first row in the table.

Conclusion: to build a reproducible JDK you need to use the same devkit - you cannot build a devkit on one host system and expect the results from it to be identical to an "equivalent" devkit built in another environment. Based on this, the preferred option for meeting the goal of using a devkit to produce something that works on RHEL7 is to use either a RHEL7 devkit or a Fedora19 devkit built on a well-defined fixed base OS. However, once we have the devkits produced we are not tied to running them on a RHEL7 system.
Next steps:
Ideally the last of those will include it in the build image as a replacement for the GCC11 that we currently install into …
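For context, pointing an OpenJDK build at a devkit instead of the image's toolchain uses the standard `--with-devkit` configure option; the paths below are assumptions:

```sh
# Build against the devkit rather than the GCC installed in the image
bash configure --with-devkit=/usr/local/devkit \
    --with-boot-jdk=/usr/lib/jvm/jdk-20   # boot JDK location is illustrative
make images
```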
Some notes from when I was experimenting in case they're useful to others:
s390x devkit creation jobs (based on @andrew-m-leonard's branch with some prototype modifications from https://github.com/sxa/ci-jenkins-pipelines/commits/devkit_s390x_rhel):
Neither of these is currently running in a docker container (unlike on the other platforms). There will need to be extra work to allow that to happen, including switching the docker software used on the host back to the default docker from the RHEL repositories, and also having a way of making the RHEL7 packages accessible - the downloads can generally only be done as …
Prototyping this on build-marist-rhel79-s390x-2 which has had the following changes applied:
Packages changed by switching from `docker-ce` to `docker`:
Summary at end of Tuesday 25th:
Note 1: I had this machine running two executors, performing the docker image build and a build job in parallel. This caused the machine to fail both jobs with "unable to fork" messages in the agent which were not immediately fixable. I have rebooted the machine and it has reconnected successfully.
Note 2: The Red Hat supplied docker package installs itself as a service, but does NOT automatically start it (either on install or on reboot).
Note 3: I have not updated the playbooks in my infrastructure PR to switch over from docker-ce to docker.
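A sketch of the package switch described above, including explicitly enabling the service per Note 2 (the docker-ce package names are the usual upstream set - an assumption here):

```sh
yum remove  -y docker-ce docker-ce-cli containerd.io
yum install -y docker
systemctl enable --now docker   # the RPM does not start the service itself
```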
NOTE: The issues in this comment have been resolved, but I'm leaving this here for historic reference.
OK, that's causing problems: build jobs and the dockerbuild image rebuild job are having issues with …

Options:
Changes on RHEL8 machine
The issue with "resource temporarily unavailable" seems to have been a Jenkins issue with the agent. When I duplicated the node definition for …
OK, the "fix" mentioned in the previous comment didn't work. Despite the rename fixing the Jenkins connection issues, the machine (…

[*] …
I tend to get fork resource errors on my local aarch64 VM, and I either have to reboot it or build with fewer jobs, e.g. `--with-jobs=4`.
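If fork limits are the problem, checking the process limit and capping build parallelism looks like this (values are illustrative):

```sh
ulimit -u                    # max user processes available to the build user
bash configure --with-jobs=4 # limit parallel build jobs
make images
```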
Time for another table I think...
[*] The old ROSI (RHEL subscription) parameters are the ones planned for removal as part of https://github.com/adoptium/infrastructure/pull/3492/files#diff-80de47d21d528cc9398601b8acc0578d6415e61ca5b3aa94f8dc9c8f645c5adb, which can be done as long as the host has a subscription and is using the RHEL-supplied docker or podman. Up to now the build images have been rebuilt manually on the build machine, with the ROSI credentials being explicitly supplied to allow it to run the playbooks.

TL;DR of the current blockers - ideally we want one type of system that performs all three actions. The gotchas are:
PROPOSAL: Assuming RHEL7+docker cannot be made to work, I think my preferred option would be to start doing the s390x builds on a RHEL8 host and deal with the changes required to achieve that. This would give us an experience comparable with the other platforms, where we create the devkit inside a container. The packages required for the devkit could be downloaded during the docker image creation, ready for use in the devkit build, which could be done in the dockerfile or in the ansible playbooks in an …

Related: adoptium/infrastructure#3217 (a place to store the images to assist distribution to build machines - we could store them as artifacts on the rhel7 job, but they are large!)
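A hypothetical shape for that proposal - RPMs pre-fetched at image build time, then mounted into the devkit build so no subscription is needed inside the container. Every path, image name, and variable value here is an assumption:

```sh
# Sketch: RPMs fetched during image creation, mounted read-only for the build
# (wiring them into the devkit makefiles would need local modifications)
docker run --rm -v /srv/devkit-rpms:/rpms:ro -v "$PWD/jdk21u:/jdk21u" \
    rhel8-devkit-image \
    make -C /jdk21u/make/devkit TARGETS=s390x-linux-gnu BASE_OS=Fedora BASE_OS_VERSION=19
```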
For test-rhel8-2 podman: try changing this line https://github.com/adoptium/ci-jenkins-pipelines/blob/e297546378b5fbdb676223eb1ae2a0abe7406679/pipelines/build/common/openjdk_build_pipeline.groovy#L2049
Good shout - I hadn't realised that we had some special logic in there to handle this. It took a bit more effort, since podman seems to take quite a while to start up when using extra options like that and therefore hits the 180-second timeout for the docker launch:
I got past that by starting up a container with the same options manually and waiting 5-10 minutes for it to run (it feels like it's duplicating the image, since it chews up space during that time), after which the images start near-instantly. It remains to be seen whether this causes a problem when the image is rebuilt. To use RHEL8+podman as per the above proposal we need to either:
I'm feeling that the reprovisioning option in 3 is good, subject to us being OK with only having one machine able to run the old stuff in the interim (although the second one has been offline for a while anyway).

Other random podman stuff discovered today:

For bind mounts to work on my desktop system I need to add … If SELinux is enabled (as it is by default on the Marist RHEL8 systems) then the above will not work. However, as per Andrew's comments …
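Since the exact option is truncated above, a hedged sketch of the usual podman approach - `:z`/`:Z` volume relabelling for SELinux, combined with the manual warm-up run described two comments up (image name and paths are assumptions):

```sh
# First run is slow while podman prepares the image; later launches are fast
podman run --rm -v "$HOME/workspace:/workspace:Z" rhel8-build-image /bin/true
```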
Suggestion from Severin to use …
Ref this - it seems to take about 5 minutes on the machine to complete the operation. It's not immediately obvious where that timeout is set (the 3 minutes seems to be from this PR and can be overridden by …).

EDIT: I've mitigated this by putting a …
I was hitting a failure due to the RHEL7 devkit not being correctly extracted from the host. In this case, when you point …
It will then fail to compile and towards the end show messages about untracked files, which have nothing to do with the underlying issue but did confuse me for a while:
A "good" build with devkit will look something like this:
and here is the equivalent with the RHEL7 devkit:
Note that I have generated a new devkit tarball on the RHEL8 machine with the devkit.info file modified to have the expected line based on the …
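For anyone checking their own tarball, the devkit build generates a devkit.info file that configure reads; a sanity check looks roughly like this (the path and the example values are assumptions):

```sh
cat /usr/local/devkit/devkit.info
# Expect entries along the lines of:
#   DEVKIT_NAME="gcc 11.3 - Fedora 19"
#   DEVKIT_TOOLCHAIN_PATH="$DEVKIT_ROOT/bin"
```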
I'd probably have left this in the final iteration of 1Q since that's where the work was done and only a few cleanups remained for this week but 🤷🏻 |
Parent: #3468
Similar to the aarch64 issue at #3519, this will cover the analysis required to create a devkit for JDK21+ on the Linux/s390x platform. Two options will be explored in parallel: