CI: build images #8
I'd like to use /workspace/images for actual VM images.

Signed-off-by: Ian Campbell <ijc@docker.com>
Hrm, I buggered the syntax, so it errored out but didn't say anything. Nifty. The webpage says:
That's something to watch for!
Force-pushed from e32e7df to 3913eaf
Previously the CI was taking 3-3½ minutes. So far in my iterations here it looks to be taking more like 7-10 minutes with parallelism of 4. We are building 4 packages followed by 4 images, so more parallelism seems unlikely to help much.
Force-pushed from 342d037 to a29a843
If doing the build separately from pushing (as I am intending in linuxkit/kubernetes#8), it is desirable to avoid a second build when pushing.

Signed-off-by: Ian Campbell <ijc@docker.com>
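The idea of building once and letting the push step reuse the result can be sketched roughly as below. This is a minimal stand-in, not the project's actual Makefile logic: `build` records its output, and `push` consults that record instead of rebuilding (all names and paths here are placeholders).

```shell
#!/bin/sh
# Sketch: build once, then push without triggering a second build.
set -e
workdir=$(mktemp -d)

build() {
  # Stand-in for the real image build: write the artifact and record its tag.
  echo "image-bytes" > "$workdir/image"
  echo "sha-abc123" > "$workdir/image.tag"
}

push() {
  # Reuse the recorded tag; only (re)build if the record is missing.
  [ -f "$workdir/image.tag" ] || build
  echo "pushing $(cat "$workdir/image.tag")"
}

build
msg=$(push)   # the push step finds the recorded artifact and skips the rebuild
echo "$msg"
rm -rf "$workdir"
```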
Force-pushed from a240c8e to 43f5079
Previously, if you ran `make update-hashes` with a dirty tree, the `-dirty` suffix was sticky and would never be automatically removed (and might even multiply!). Also remove some unnecessary quotes.

Signed-off-by: Ian Campbell <ijc@docker.com>
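The sticky-suffix failure mode can be illustrated with a tiny sketch (hypothetical values; this is not the project's actual update-hashes code). If the update derives the new value from the previously recorded one, a `-dirty` suffix persists and can even be appended again; recomputing from the current tree state avoids that.

```shell
#!/bin/sh
# Sketch of the -dirty stickiness bug and its fix (placeholder values).

tree_hash="abc123"          # pretend hash of the current tree (assumption)
recorded="abc123-dirty"     # value left in the file by an earlier dirty run

# Buggy update: reuses the recorded value as the base, so -dirty never goes
# away, and stacks up on repeated runs.
buggy="${recorded}-dirty"

# Fixed update: always recompute from the tree, appending -dirty only when
# the tree is dirty *now* (here the tree is clean, so no suffix).
dirty=""                    # e.g. dirty=$(git diff --quiet || echo "-dirty")
fixed="${tree_hash}${dirty}"

echo "buggy: $buggy"        # abc123-dirty-dirty
echo "fixed: $fixed"        # abc123
```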
OK, I think this is about ready. I'll keep #9 open for now in case review requires retesting.
Some times observed during testing. Package builds (in parallel up to 4):
I didn't do lots of measurements and there's some variance (mostly due to network traffic, I expect). Image building, up to 4 in parallel, each depending on the relevant package builds:
Again, some variance. Push to hub takes ~2:00 if it pushes both. If all 4 packages need building then the elapsed time was around 24:30; with nothing needing to be built it was ~8:10. I expect this will vary depending on the scheduling of the parallel jobs and which packages have actually changed. Without the parallelism I'd expect the full nop run to take ~20 mins and a full build more like ~30 mins, so I think splitting into jobs like this is worthwhile.
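The two-phase, 4-way-parallel structure described above (4 package builds, then image builds that depend on them) can be sketched with shell background jobs standing in for the real CI's parallel make; the package names are placeholders.

```shell
#!/bin/sh
# Sketch of 4-way parallel package builds followed by a dependent phase.

out=$(
  for pkg in pkg-a pkg-b pkg-c pkg-d; do
    (
      # stand-in for the real package build
      echo "built $pkg"
    ) &
  done
  wait   # image builds may only start once all package builds finish
)
echo "$out"
echo "all packages built; image phase may begin"
```

Since the two phases are strictly ordered and each has only 4 independent jobs, parallelism beyond 4 buys nothing, which matches the observation above.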
Previously (in linuxkit#8) building both seemed to timeout or otherwise fall foul of some sort of infra glitch. Try just building one for now. This is a first step in trying to actually boot images in CI.

Signed-off-by: Ian Campbell <ijc@docker.com>
Build a set of images from the cross product of KUBE_RUNTIME={docker,cri-containerd} and KUBE_NETWORK={weave,bridge}, building both BIOS and EFI images in all cases.

WIP while I play with the CI setup and understand the various interactions of workspaces, the docker image cache, artifact publishing from multiple jobs, etc.

The previous use of /workspace/images is switched to /workspace/packages to free up images for the VM images.
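The build matrix described above (runtime × network × firmware) can be sketched as a simple nested loop; `build_image` here is a hypothetical placeholder for the real image build step, not the project's actual tooling.

```shell
#!/bin/sh
# Sketch: enumerate the 2 x 2 x 2 build matrix from the commit message.

build_image() {
  # Placeholder for the real per-combination image build.
  echo "building runtime=$1 network=$2 firmware=$3"
}

out=$(
  for runtime in docker cri-containerd; do
    for network in weave bridge; do
      for firmware in bios efi; do
        build_image "$runtime" "$network" "$firmware"
      done
    done
  done
)
echo "$out"
```

This yields 8 image builds in total, which is why the CI timing concerns above focus on parallelism and caching.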