Add ws2022 image/build to cri-containerd tests #1160
Conversation
Leaving this as a draft briefly because I was curious what's a better approach for the switch statement. We could take anything higher than ws2022 and just convert it to the ws2022 build so the switch will pick up the ws2022 image, or we could assign the build number to a variable and match anything that's V21H2 and higher to the ltsc2022 image (but then the rest of the switch is just odd). I've done both, one approach for each.

@ambarve Assigning you as we both share the constant pain of taking out that panic to run the tests locally 😂. Let me know what you think about the above.
I realise it's not part of this PR (it was discussed in #1155), but seeing it here, particularly in the context of a comment referencing "21h1", I wish the naming had come out differently. Since "21H2" is the point where all the Windows product lines are diverging their builds, it's probably a good point to avoid using that term, or alternatively to explicitly use it only for Windows Server in the "containers" context. In that alternative case, this PR's reference to "21H1" (the comment I just commented on, which is why I'm thinking about this now) should probably be "Windows 10 21H1", to be super-clear.

It kind of depends on future plans: if there are going to be container base layers for the Azure Stack HCI releases, then the "alternative" makes sense, as there will be a need to refer to SAC versions in the future, and the mixed LTSC/SAC versioning has been a bit confusing in container tags. However, if the next set of container base images is likely to come with the next Server LTSC release after Server 2022, which seems feasible due to the new kernel ABI stability work, then you won't need named SAC versions.

Knowing those future plans might also help choose between the two approaches to handling this: will the switch slowly grow every six months, or is the long switch now "legacy", and in future it'll be the small set of `>=` cases?
@TBBle You've got an extremely fair point here and we're looking into it. Might be best to revert that V21H2 change until we know what our stance should be for this package/the constants going forward; things have gotten a bit more annoying now, it seems :/. I think ideally we should segment the constants by client/server in some way now. @kevpar had suggested possibly having W10, W11, and WS prefixes to represent the Windows 10, Windows 11, and Windows Server build numbers respectively. We'll hopefully have an idea in the coming week.

@TBBle Small update, gonna sync with some folks tomorrow on this and hopefully figure out a plan for names in this package going forward. We'd really wanna get the naming for 21H2 right, seeing as it represents three entirely different build numbers now -_- and who knows what the future holds.
This change adds a new case to the getWindowsServerCoreImage and getWindowsNanoserverImage functions to return ws2022 on a ws2022 or higher build. The higher case is because of some recent efforts to improve down-level compatibility for Windows container images. For reference, the ltsc2022 image works on a win11 host without hypervisor isolation. Signed-off-by: Daniel Canter <dcanter@microsoft.com>
It used to read "The panic for "unsupported build" should only be triggered if the user is on a build older than RS5 or 21h1 now." This is a bit confusing; it's clearer worded as "if the user is on 21H1 or a build older than RS5". Signed-off-by: Daniel Canter <dcanter@microsoft.com>
(force-pushed from 9ee5593 to 9b237e4)
In my opinion, I prefer the latter approach. And I expect the first mapping will slowly grow over the next few LTSC releases, particularly if there are overlaps in supported image build versions.
As three different builds of Windows have the 21h2 moniker, use the server 21h2 definition. Signed-off-by: Daniel Canter <dcanter@microsoft.com>
(force-pushed from 9b237e4 to af9e01f)
@ambarve pingaroo
test/cri-containerd/main_test.go (outdated)
```go
// https://techcommunity.microsoft.com/t5/containers/windows-server-2022-and-beyond-for-containers/ba-p/2712487)
// the ltsc2022 image should continue to work on builds ws2022 and onwards. The panic for "unsupported build"
// should only be triggered if the user is on 21h1 or a build older than RS5 now.
if build > osversion.V21H2Server {
```
For any release that is > V21H2Server we will always end up using the "nanoserver:ltsc2022" image. Right now this is fine because we don't have any such release, but in the future this could cause problems. How about moving this if condition under the `default` case? In the `default` case we check if build > osversion.V21H2Server and use the nanoserver:ltsc2022 image, otherwise we panic.
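If it helps to visualise the suggested shape, here's a minimal standalone sketch (not the actual test code: literal build numbers stand in for the osversion constants, and only two of the cases are kept for brevity):

```go
package main

import "fmt"

// Illustrative build numbers standing in for the osversion package constants.
const (
	rs5         uint16 = 17763 // osversion.RS5
	v21H2Server uint16 = 20348 // osversion.V21H2Server
)

func getWindowsNanoserverImage(build uint16) string {
	switch build {
	case rs5:
		return "mcr.microsoft.com/windows/nanoserver:1809"
	case v21H2Server:
		return "mcr.microsoft.com/windows/nanoserver:ltsc2022"
	default:
		// Down-level compatibility: builds newer than ws2022 can still
		// run the ltsc2022 image; anything older is unsupported.
		if build > v21H2Server {
			return "mcr.microsoft.com/windows/nanoserver:ltsc2022"
		}
		panic("unsupported build")
	}
}

func main() {
	// A Windows 11 build, newer than ws2022, falls through to the default case.
	fmt.Println(getWindowsNanoserverImage(22000))
}
```

Exact matches hit their own `case`, and only unmatched builds reach the `default` clause, where the one `>` check decides between fallback and panic.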
Sure that works for me
Done @ambarve
I think I liked the previous way better. The future evolution in my mind was adding new if checks before that `build > osversion.V21H2Server` check, so the "default" clause didn't grow too long and nested, since over time this will grow as well, as we eventually hit OS build versions that can't run the ltsc2022 image, whenever that happens. It also loses the separation of the "host build ➡ image build" mapping from the "image build ➡ MCR repository string" mapping, which I liked about that layout.

Structuring it the way it is after 6388b44, I think it'd be clearer to go with the `getWindowsServercoreImage` approach before af9e01f instead, because that's what we've kind-of done, except using `default` to add an extra level of nesting for the non-`==` cases.
I am not sure I understand your point about growing the "default" clause. As new OS builds and corresponding images are released, we will add a new "case" for those in the switch statement, and if your machine doesn't match any of those we get into the default case, which can remain as it is for at least the next few years.
The general idea is we try to include every new server-related container image in this switch. Let's say you wanted to run this on a build that doesn't have a container image available (a Windows 11 insider build, for example); currently you can't, as we'll just panic a couple lines in. This logic at least makes it so that if there's a container image that matches your host exactly, we'll pull that, and if not and your build is higher than Server 2022, where the ABI should be stable, we can use the ws2022 image.
Also never thought I'd see the day Python and Go become one 😝
I fixed my `def` to be `func`. I'd been wondering why the syntax highlighting was not colouring that in...
Let me explain my idea behind this change: when running our tests on preview builds (for which official container images aren't available yet), our tests panic. For any builds > V21H2 we can use the ltsc2022 image and continue the tests instead of panicking. Notice that when we are working with, say, V25H2 preview builds, we will already have official container images for the build before that, and those images would be used (via the default case) to run the tests (i.e. we don't need to stick to V21H2 builds if a V23H2 build is available).

So, from here onwards, every time an official container image is released, we will add a case for it in the switch and use it in our default "if build > V21H2" condition. Does that make sense?

Having said that, I like the idea of splitting the build tags from the repositories, so that every time we add a new build we have to update only one function instead of the two functions that we currently have. (However, I agree that this is just test code, so I am fine even if we don't make this change right now.)
That makes sense to me, and is actually why I suggested this structure: to break the logic of "latest suitable container image version" out of "map build to tag". My thinking is that the `getContainerBuildForOSBuild` in the situation you've described is:
```go
func getContainerBuildForOSBuild(build uint16) uint16 {
	if build >= osversion.V23H2Server {
		return osversion.V23H2Server
	} else if build >= osversion.V21H2Server {
		return osversion.V21H2Server
	}
	return build
}
```
so in effect, unknown builds after the "downlevel support point" (`osversion.V21H2Server`) use the latest known image version, assuming that some time previously, `osversion.V23H2Server` was added to `getContainerTagForContainerBuild` when it became available.
And having put it like that, now I'm thinking what we could have (but it seems like over-engineering, and at this point the conversation has probably consumed as much brain-power as all future updates to this code anyway...) would be a list of known container image builds: you just search the list for the OS build you have and take the highest-but-not-greater-than match. And then if the result was not an exact match, and is less than `osversion.V21H2Server`, panic.
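That list-search idea might look something like this standalone sketch (literal build numbers stand in for the osversion constants; `knownImageBuilds` and `downlevelSupportPoint` are names invented here for illustration):

```go
package main

import "fmt"

// Builds for which container images exist (illustrative numbers standing in
// for osversion.RS5, osversion.V20H1, and osversion.V21H2Server).
var knownImageBuilds = []uint16{17763, 19041, 20348}

// From ws2022 onwards the ABI is stable, so newer hosts can use older images.
const downlevelSupportPoint uint16 = 20348

// getContainerBuildForOSBuild returns the highest known image build that is
// not greater than the host build, panicking when the only candidate is an
// inexact match from before the downlevel support point.
func getContainerBuildForOSBuild(build uint16) uint16 {
	best := uint16(0)
	for _, b := range knownImageBuilds {
		if b <= build && b > best {
			best = b
		}
	}
	if best == 0 || (best != build && best < downlevelSupportPoint) {
		panic("unsupported build")
	}
	return best
}

func main() {
	// A build newer than ws2022 maps down to the latest known image build.
	fmt.Println(getContainerBuildForOSBuild(22621))
}
```

Adding support for a new image version would then be a one-element append to `knownImageBuilds`, with the tag lookup handled elsewhere.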
And having put it like that, I realise that's roughly what `getWindowsServercoreImage` would have evolved into, if we reversed it to be latest-first:
```go
func getContainerTagForOSBuild(build uint16) string {
	switch b := build; {
	case b >= osversion.V23H2Server: // Illustrative case for how this function grows over time
		return "ltsc2024"
	case b >= osversion.V21H2Server:
		return "ltsc2022"
	case b == osversion.V20H2:
		return "2009"
	case b == osversion.V20H1:
		return "2004"
	case b == osversion.V19H2:
		return "1909"
	case b == osversion.V19H1:
		return "1903"
	case b == osversion.RS5:
		return "1809"
	default:
		panic("unsupported build")
	}
}

func getWindowsServerCoreImage(build uint16) string {
	tag := getContainerTagForOSBuild(build)
	return "mcr.microsoft.com/windows/servercore:" + tag
}

func getWindowsNanoserverImage(build uint16) string {
	tag := getContainerTagForOSBuild(build)
	return "mcr.microsoft.com/windows/nanoserver:" + tag
}
```
And reversing my thought on #1160 (comment), now I like this way better (`switch` as a cleaner chained `if`-`else`), as it succinctly expresses the behaviour.
Looking back, perhaps my faulty assumption is that after ltsc2024 images are created, we still want to support intermediate builds that are not ltsc2022 but use the ltsc2022 base image. That's the reason I see the `default` clause in the current code growing eternally, and want to promote it to be the primary behaviour, rather than being three nestings deep. If we're only going to keep the one `>` match, then I'm trying to solve the wrong problem. ^_^
Also, please don't let this discussion block landing this work. ^_^
Move build check to default case. Signed-off-by: Daniel Canter <dcanter@microsoft.com>
@ambarve Let me know if this looks good
LGTM!
There was a leftover block of code that assigned the build to ltsc2022 at the top of getWindowsServerCoreImage, but it was also assigned in the default case of the switch statement. Signed-off-by: Daniel Canter <dcanter@microsoft.com>
Thanks @ambarve! Realized there was a duplication in the getWindowsServerCoreImage function.
* Add ws2022 image/build to cri-integration tests This change adds a new case to the getWindowsServerCoreImage and getWindowsNanoserverImage functions to return ws2022 on a ws2022 or higher build. The higher case is because of some recent efforts to improve down-level compatibility for Windows container images. For reference, the ltsc2022 image works on a win11 host without hypervisor isolation. Signed-off-by: Daniel Canter <dcanter@microsoft.com> (cherry picked from commit f099e34) Signed-off-by: Daniel Canter <dcanter@microsoft.com>
[release/0.9] Add ws2022 image/build to cri-containerd tests (#1160)
This change adds a new case to the getWindowsServerCoreImage and
getWindowsNanoserverImage functions to return ws2022 on a ws2022 or higher
build. The higher case is because of some recent efforts to improve down-level
compatibility for Windows container images. For reference, the ltsc2022 image
works on a win11 host without hypervisor isolation.
Signed-off-by: Daniel Canter dcanter@microsoft.com