chore(Jenkinsfile) introduce Maven client side caching #4669
base: master
Conversation
Signed-off-by: Damien Duportal <damien.duportal@gmail.com>
First attempt: archive restoration takes between 5 and 13 minutes during the parallel stages, while it takes ~1 minute in the prep stage, so it does not scale well in the current setup. Trying a two-step process instead of a direct tar read from the mount: 1. copy the archive from the S3 share to a local empty dir, 2. uncompress the archive from that local dir into another local dir. The goal is to identify what is not scaling: the local node filesystem (in which case step 2 would be slow) or the S3 accesses (in which case step 1 would be slow).
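A minimal sketch of the two-step restore described above, assuming hypothetical paths for the S3 CSI mountpoint and the local emptyDir volumes (the actual Jenkinsfile paths may differ):

```bash
# Step 1: copy the compressed cache archive from the S3 mountpoint to a local emptyDir
# (isolates S3 read throughput from decompression speed)
cp /s3-mount/maven-cache.tar.gz /local-cache/maven-cache.tar.gz

# Step 2: uncompress from local disk to local disk
# (isolates local filesystem / CPU performance)
mkdir -p /local-cache/.m2/repository
tar -xzf /local-cache/maven-cache.tar.gz -C /local-cache/.m2/repository

# If step 1 dominates, the S3 access (mount driver) is the bottleneck;
# if step 2 dominates, the local node filesystem is.
```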
Signed-off-by: Damien Duportal <damien.duportal@gmail.com>
Second retry: copying from S3 to the local filesystem is unsustainable. Copying the archive from the mountpoint (S3 CSI driver) to the local filesystem (emptyDir on a local NVMe) takes more than 10 minutes across all the parallel pct stages, so I cancelled the build. Next step is to try the aws s3 command instead, to rule the S3 mount driver in or out.
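A sketch of that alternative, downloading with the AWS CLI directly instead of reading through the CSI mountpoint; the bucket name and archive key are assumptions for illustration:

```bash
# Fetch the cache archive straight from S3 with the AWS CLI,
# bypassing the S3 CSI mount driver entirely
aws s3 cp s3://example-maven-cache-bucket/maven-cache.tar.gz /local-cache/maven-cache.tar.gz

# Then uncompress locally as before
mkdir -p /local-cache/.m2/repository
tar -xzf /local-cache/maven-cache.tar.gz -C /local-cache/.m2/repository
```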
With a 1.2 GB archive: jenkins-infra/helpdesk#4525 (comment). It's hard to conclude because
This PR is a first "real life" test of the BOM builds using the S3 PVC-based Maven client-side caching.
It uses the ~6 GB archive (a partial cache created in #4667) as a first attempt.
The goal is to verify that it does not slow down the BOM builds: we want this cache at least as a protection layer, so that BOM builds do not break when ACP starts receiving HTTP/500 responses from Artifactory.
Ref. jenkins-infra/helpdesk#4525
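For context, a hedged sketch of the kind of restore step the Jenkinsfile could run at the start of a build stage; the mount path, archive name, and repository location are assumptions for illustration, not the exact code in this PR:

```bash
# Restore the Maven local repository from the PVC-backed cache archive, if present,
# so the BOM build can still resolve artifacts when ACP/Artifactory returns HTTP/500.
if [ -f /maven-cache/maven-cache.tar.gz ]; then
    mkdir -p "${HOME}/.m2/repository"
    tar -xzf /maven-cache/maven-cache.tar.gz -C "${HOME}/.m2/repository"
fi
```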
Testing done
Submitter checklist