podvm: Understand and reduce podvm permutations #1890
Part 1 - Identify podvm builds
Did you mean "Any downstream testing?" Maybe we can have a "being tested" column. mkosi_x86_64 should work on both AWS and Azure. AFAIK all the packer images use cloud-init?
For RHEL I meant testing of the upstream podvm build, but that testing itself could be manual testing, or testing in a downstream environment (as I'm pretty confident we don't have any upstream automated testing for RHEL). We have some documentation for it though. I hope that helps clarify?
Will do
These are both using process-user-data I believe, and primarily Fedora-based in the upstream testing?
Not quite :) I guess we have either (automated) testing in the project or potentially "downstream" testing (e.g. a vendor product that uses CAA). One could argue that untested images, if they are consumed and tested downstream, should also be maintained downstream?
Yes, I think we can just check for "cloud-init" yes/no. cloud-init will not work on dm-verity protected root filesystems, so we could also just check for dm-verity yes/no?
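A minimal sketch of that idea (the image names and flag values below are purely illustrative assumptions, not taken from the actual chart): the cloud-init column could simply be the inverse of a dm-verity column.

```python
# Hypothetical sketch: derive a "cloud-init yes/no" column from a "dm-verity yes/no"
# flag, since cloud-init cannot write to a dm-verity protected (read-only) rootfs.
# The image names and flag values here are illustrative assumptions.
images = {
    "mkosi_x86_64": {"dm_verity": True},
    "packer_azure_amd64": {"dm_verity": False},
}

for name, attrs in images.items():
    attrs["cloud_init"] = not attrs["dm_verity"]
    print(f"{name}: dm-verity={attrs['dm_verity']}, cloud-init={attrs['cloud_init']}")
```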
None of the mkosi images are being tested atm, afaict.
The amd64 Azure packer image is being tested.
So the grey area that I was hinting at was for when pure upstream versions were tested internally. E.g. for ibmcloud, we tested the pure upstream version, but due to lack of publicly available resources those tests were done internally. I agree that if the versions are downstream then the downstream teams are responsible for maintenance (though we want to do our best not to break them, so it's potentially interesting). Sorry, I think I'm mostly overcomplicating an already complicated chart!
Part 2 - Identify To-Be of podvm builds
As the attempt to list out our "As-is" set of podvm images hasn't seemed to work, probably because it is too complex, maybe we should focus on our To-Be set and what we are aiming for in Step 2 of this work. My starting list of suggestions, based on quite a lot of ignorance, is:
It would be good to understand where we are - particularly with libvirt and ibmcloud - and what gaps we have in just trying out podvm builds.
Hi @stevenhorsman! What if we keep an entry for Ubuntu/amd64/libvirt/packer/cloud-init for as long as the mkosi equivalent isn't running on CI?
So I'm just trying to track and understand if anyone has already tried these combinations above for the short-term goals. It isn't designed to say that we will remove the Ubuntu builds until we have replacements. Ideally, if/when someone confirms that Fedora/amd64/libvirt/mkosi/cloud-init works, we'd start parallel builds in our CI to create that version of the podvms.
AFAIK RHEL is supported only by packer builds, and only for AWS, Azure and libvirt, and there's no upstream testing for RHEL.
At the moment we have a matrix of 4 possible options for podvm: (mkosi/packer) x (cloud-init/process-user-data). We then multiply this by the base OSs (ubuntu/fedora/rhel) (we will ignore OS version for now on the assumption we can sync on that?) and by the cloud providers that can support each combination, and it explodes quite a lot and becomes complicated to understand and test.
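As a rough illustration of how quickly the matrix grows (the provider list below is just an assumed example subset, not a statement of what each combination actually supports):

```python
# Minimal sketch of the permutation explosion described above.
# The provider list is an illustrative assumption, not the real support matrix.
from itertools import product

build_tools = ["mkosi", "packer"]
provisioners = ["cloud-init", "process-user-data"]
base_oses = ["ubuntu", "fedora", "rhel"]
providers = ["aws", "azure", "ibmcloud", "libvirt"]  # example subset

combinations = list(product(build_tools, provisioners, base_oses, providers))
print(len(combinations))  # 2 * 2 * 3 * 4 = 48 raw permutations before any pruning
```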
We want to reduce this, so we can minimise differences and duplicated code. One possible plan is: