Kubernetes v1.33 Mid Cycle Sneak Peek Blog #50111
Conversation
✅ Pull request preview available for checking. Built without sensitive environment variables.
Sorry for the delay with the write-up, we have put together the initial draft which is now ready for review 🙇 We were working in a separate Google Doc, and it ended up full of edits and comments, which may make it more difficult to review. I am keeping this as a simple PR for the review, but if it would be beneficial to create a separate interactive doc, I can surely do that 👍 Ping @natalisucks @katcosgrove /hold
@rytswd nice piece!
Just found what's probably a wrong link from a copy-paste. 👍🏻
Thanks for the review @dipesh-rawat @graz-dev! I have applied all the suggestions so far 👍
Some grammatical nits. Looks great otherwise!
### In-Place vertical Pod scalability with mutable PodSpec for resources ([KEP-1287](https://kep.k8s.io/1287))

When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating resources allocated to Pod’s container(s). Currently, As PodSpec’s Container Resources are immutable, updating Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?
Suggested change:
When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating resources allocated to a Pod’s container(s). Currently, As PodSpec’s Container Resources are immutable, updating a Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?
I find the singular / plural usage in "a Pod's container(s)" quite confusing (and the original wording is already more complex than I like). What do you think about updating this to something like the following instead?
Suggested change:
When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating container resources allocated to the Pod. Currently, As PodSpec’s Container Resources are immutable, updating any of the Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?
Try not to write "PodSpec"; we prefer spec
in backticks separate from Pod in UpperCamelCase.
PodSpec is mostly something you see either as part of the OpenAPI document or in the source code. People operating Kubernetes see spec
and Pod
within manifests and often wouldn't see PodSpec
at all.
Makes sense, I took the KEP reference directly, but it surely sounds more user friendly to simply use `spec`.
Suggested change:
When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating container resources allocated to the Pod. Currently, since Pod's `spec.containers.resources` are immutable, updating any of the Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?
I am not sure if `spec.containers.resources` is appropriate, though. I think it would be overkill to use the jq syntax of `spec.containers[].resources[]`?
Your last suggestion looks good; `spec.containers.resources` is better. If you want to make it more readable, I can suggest something like:
Suggested change:
When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating resources allocated to Pod’s container(s). Currently, since container resources defined in Pod's `spec` are immutable, updating any of them results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?
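For readers following along, the field under discussion sits here in a minimal Pod manifest (a sketch; the name and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo            # illustrative name
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      # The per-container resources discussed above; today, changing these
      # on a running Pod means the Pod gets recreated.
      resources:
        requests:
          cpu: 250m
          memory: 128Mi
        limits:
          cpu: 500m
          memory: 256Mi
```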
@rytswd I added some comments to improve the readability for unfamiliar readers as well.
I also suggest some fixes to stay on track with the other "sneak peek" posts published for previous releases (see: https://kubernetes.io/blog/2024/11/08/kubernetes-1-32-upcoming-changes/)
The [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) API has been stable since v1.21, which effectively replaced the original Endpoints API. The original Endpoints API was simple and straightforward, but also posed some challenges when scaling to large numbers of network endpoints. There have been new Service features only added to EndpointSlices API such as dual-stack networking, making the original Endpoints API ready for deprecation.

This deprecation only impacts only those who use the Endpoints API directly from workloads or scripts; these users should migrate to use EndpointSlices instead. You can find more about the deprecation implications and migration plans in a dedicated blog post [Endpoints formally deprecated in favor of EndpointSlices](TBC).
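For readers unfamiliar with the newer API, a minimal EndpointSlice object looks roughly like this (a sketch; the name and addresses are made up):

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-svc-abc12              # illustrative name
  labels:
    # Ties the slice to its Service, which the legacy Endpoints API
    # expressed implicitly through the object name.
    kubernetes.io/service-name: example-svc
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 8080
endpoints:
  - addresses:
      - "10.0.0.12"
    conditions:
      ready: true
```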
Are you planning to have the "Endpoints formally deprecated in favor of EndpointSlices" post published before this piece? If not, remove the reference to a TBC blog post.
If "Endpoints formally deprecated in favor of EndpointSlices" will be published before this one, I think the best option is:
Suggested change:
This deprecation only impacts only those who use the Endpoints API directly from workloads or scripts; these users should migrate to use EndpointSlices instead. You can find more about the deprecation implications and migration plans in a [dedicated blog post](TBC).
Yes, the idea is to have the dedicated blog post out before this one goes out. But that one is still in draft, and it may be tight to get it released before the mid cycle blog goes out. I'll keep it as is for now, but will update according to your suggestion later (this PR is already on hold)
Following its deprecation in v1.31, as highlighted in the [release announcement](/blog/2024/07/19/kubernetes-1-31-upcoming-changes/#deprecation-of-status-nodeinfo-kubeproxyversion-field-for-nodes-kep-4004-https-github-com-kubernetes-enhancements-issues-4004), the `.status.nodeInfo.kubeProxyVersion` field will be removed in v1.33. This field was set by kubelet, but its value was not consistently accurate. As it has been disabled by default since v1.31, the v1.33 release will remove this field entirely.
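For reference, the field being removed sits in the Node status, roughly like this (abbreviated, with example values only):

```yaml
# Abbreviated view of a Node object's status
status:
  nodeInfo:
    kubeletVersion: v1.32.0       # example value
    kubeProxyVersion: v1.32.0     # field slated for removal in v1.33
```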
### Host network support for Windows pods ([KEP-3503](https://kep.k8s.io/3503))
Suggested change:
### Host network support for Windows pods
Add the reference to the KEP in the paragraph instead of in the title.
Suggested change:
### Removal of host network support for Windows pods
Ditto, this will be updated in a separate commit
This has been updated with db248bc
The following list of enhancements is likely to be included in the upcoming v1.33 release. This is not a commitment and the release content is subject to change.

### In-Place vertical Pod scalability with mutable PodSpec for resources ([KEP-1287](https://kep.k8s.io/1287))
Suggested change:
### In-Place vertical Pod scalability with mutable PodSpec for resources
Add the reference to the KEP in the paragraph instead of in the title.
Suggested change:
### Improvements to in-place vertical scaling for Pods
Ditto, I will update the wording here and add the KEP reference in the paragraph below.
There was a Feature Blog back in v1.27 when it made it to alpha, and its title was "In-place Resource Resize for Kubernetes Pods". I think I'll write something similar, like "In-place resource resize for vertical scaling of Pods"?
This has been updated with 9e4ca8c
The [KEP-1287](https://kep.k8s.io/1287) is precisely to allow such in-place Pod updates. It opens up various possibilities of vertical scale-up for stateful processes without any downtime, seamless scale-down when the process has only little traffic, and even allocating larger resources during the startup and eventually lowering once the initial setup is complete. This has been released as alpha in v1.27, and is expected to land as beta in v1.33.
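As an illustration, the alpha implementation lets a Pod declare how each resource may be resized via a `resizePolicy` field (a sketch based on the alpha API behind the `InPlacePodVerticalScaling` feature gate; details may change as the feature moves to beta):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inplace-resize-demo          # illustrative name
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired       # resize CPU without restarting the container
        - resourceName: memory
          restartPolicy: RestartContainer  # memory changes still restart this container
      resources:
        requests:
          cpu: 250m
          memory: 128Mi
```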
### DRA’s ResourceClaim Device Status graduates to beta ([KEP-4817](https://kep.k8s.io/4817))
Suggested change:
### DRA’s ResourceClaim Device Status graduates to beta
Add the reference to the KEP in the paragraph instead of in the title.
Watch out for implying that the graduation is definitely going to happen. We don't make promises in the mid-cycle blog unless SIG Architecture would confirm the promise has been made.
The release stage may be the most important piece of information for users, and I don't see how else we can highlight these beta / stable features. We have a disclaimer of how things could change before the actual release as well, so this shouldn't read as a promise, but what we think is worth highlighting given its high probability of making it a part of the release.
As the KEP is tracked for code freeze, should we keep this as is, and drop the whole section if the code freeze situation changes? Also, we could potentially make a note of this in the Release Announcement if the situation changed from the mid cycle blog.
The KEP link has been moved out from the heading with e0f4df1
You can find more information in [Dynamic Resource Allocation: ResourceClaim Device Status](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#resourceclaim-device-status).

### Ordered Namespace Deletion ([KEP-5080](https://kep.k8s.io/5080))
Suggested change:
### Ordered Namespace Deletion
Add the reference to the KEP in the paragraph instead of in the title.
This KEP introduces a more structured deletion process for Kubernetes namespace to ensure secure and deterministic resource removal. The current semi-random deletion order can create security gaps or unintended behaviour, such as Pods persisting after their associated NetworkPolicies are deleted. By enforcing a structured deletion sequence that respects logical and security dependencies, this approach ensures Pods are removed before other resources. The design improves Kubernetes’s security and reliability by mitigating risks associated with non-deterministic deletions.

### Enhancements to Kubernetes Job Management and Persistent Volume Policies ([KEP-3850](https://kep.k8s.io/3850), [KEP-3998](https://kep.k8s.io/3998), [KEP-2644](https://kep.k8s.io/2644))
Suggested change:
### Enhancements to Kubernetes Job Management and Persistent Volume Policies
Add the reference to the KEP in the paragraph instead of in the title.
Along with the wording update, I have moved the KEP links from the heading with d9da762
Took most of the suggestions, but a few things left as is for now
## Deprecations and removals for Kubernetes v1.33

### Deprecate v1.Endpoints ([KEP-4974](https://kep.k8s.io/4974))
I'm a bit torn about this one -- it is true that you can check the paragraph, and that's the point of having these blogs, making it easier for readers of any level to understand upcoming changes. But for those readers with technical understanding, it would be useful to check out the KEPs to find more.
This is my personal take, but I think KEPs are such a great asset the Kubernetes community has, and I want to make them as accessible as possible. I could take this out from the title, and perhaps put it at the bottom of each section, saying something like "If you want to find more about this, read this KEP" -- what do you think?
Windows Pod networking aimed to achieve feature parity with Linux and provide better cluster density by allowing containers to use Node’s networking namespace. The original implementation landed as alpha with v1.26, but as it faced unexpected containerd behaviours, and alternative solutions were available, it has been decided that the KEP will be withdrawn and the code removed in v1.33.
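For context, the withdrawn feature targeted the same mechanism Linux Pods use today: setting `hostNetwork` in the Pod spec, which v1.33 will no longer honour for Windows Pods. A hypothetical manifest sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: win-hostnet-demo             # illustrative name
spec:
  hostNetwork: true                  # the setting the withdrawn KEP aimed to support on Windows
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: app
      image: mcr.microsoft.com/windows/nanoserver:ltsc2022
```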
## Sneak peek of Kubernetes v1.33
I think I'd keep this as "sneak peek", because, at this point, we don't yet know if these changes will actually land in v1.33.
"Upcoming changes" may be a good one, but I'm wondering if it loses a bit of the fun?
I think I incorporated all of the suggestions so far, or left a comment to discuss further. Please feel free to add more comments / suggestions as you find more!
Looks great team!
One thing I'd add is the expected release date for v1.33 but that's it :)
Looks good to me! Great job everyone!
It's looking great, just suggesting some small grammatical nits.
* Beta or pre-release API versions must be supported for 3 releases after the deprecation.

* Alpha or experimental API versions may be removed in any release without prior deprecation notice; this process can become a withdrawal in cases where a different implementation for the same feature is already in place.
Can we simplify this line? Does it mean that if a different implementation for the same feature already exists, then Alpha or experimental API versions may be removed in any release?
The nature of alpha features is that they can be removed at any point, with the process called "withdrawal" rather than deprecation. My understanding is that it's essentially the same process, but given it's only alpha, there is no guarantee that it would be supported in future releases.
This wording is something I inherited from previous cycles, and while that's not a reason to keep things unchanged, I personally found this relatively clear and straightforward. I'm open for suggestions, though!
This sentence is OK by me.
Right, alright. Thanks for your work and explanation.
Thanks. Here's some further feedback on behalf of the blog team.
I've added a small number of corrective comments on the existing review from @graz-dev but overall please do pay attention to Graziano's feedback - it looks appropriate and relevant.
## Deprecations and removals for Kubernetes v1.33

### Deprecate v1.Endpoints ([KEP-4974](https://kep.k8s.io/4974))
Try this:
## Deprecation of the stable Endpoints API
This is not really an enhancement, unlike some of the other things we're giving a sneak peek into.
This deprecation only impacts only those who use the Endpoints API directly from workloads or scripts; these users should migrate to use EndpointSlices instead. You can find more about the deprecation implications and migration plans in a dedicated blog post [Endpoints formally deprecated in favor of EndpointSlices](TBC).

### Deprecate `status.nodeInfo.kubeProxyVersion` field ([KEP-4004](https://kep.k8s.io/4004))
I agree with @graz-dev - omit the hyperlink here. Links within headings don't work well, although we do sometimes use them.
I'd write:
### Removal of kube-proxy version information in node status
I'm afraid to point it out, but: the existing heading is almost misleading: people might think we're deprecating the field, not removing a deprecated field.
Following its deprecation in v1.31, as highlighted in the [release announcement](/blog/2024/07/19/kubernetes-1-31-upcoming-changes/#deprecation-of-status-nodeinfo-kubeproxyversion-field-for-nodes-kep-4004-https-github-com-kubernetes-enhancements-issues-4004), the `.status.nodeInfo.kubeProxyVersion` field will be removed in v1.33. This field was set by kubelet, but its value was not consistently accurate. As it has been disabled by default since v1.31, the v1.33 release will remove this field entirely.

### Host network support for Windows pods ([KEP-3503](https://kep.k8s.io/3503))
Suggested change:
### Removal of host network support for Windows pods
### DRA’s ResourceClaim Device Status graduates to beta ([KEP-4817](https://kep.k8s.io/4817))

The `Devices` field in `ResourceClaim.Status`, introduced in v1.32, graduates to beta in v1.33. This field allows drivers to report device status data, improving both observability and troubleshooting capabilities.
Suggested change:
The `devices` field in with ResourceClaim `status`, originally introduced in the v1.32 release, is likely to graduate to beta in v1.33. This field allows drivers to report device status data, improving both observability and troubleshooting capabilities.
Thanks, I will slightly update your suggestion with the following:
Suggested change:
The `devices` field in ResourceClaim `status`, originally introduced in the v1.32 release, is likely to graduate to beta in v1.33. This field allows drivers to report device status data, improving both observability and troubleshooting capabilities.
The wording has been updated with the above snippet with f18e1a4
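As a rough illustration of the kind of data drivers can report there, a ResourceClaim status might look something like this (a sketch; the group/version and field names follow the KEP-4817 design and may differ in the released API):

```yaml
apiVersion: resource.k8s.io/v1beta1    # version used in v1.32; may change by v1.33
kind: ResourceClaim
metadata:
  name: example-claim                  # illustrative name
status:
  devices:
    - driver: gpu.example.com          # illustrative driver name
      pool: pool-0
      device: device-0
      conditions:                      # abbreviated; real conditions carry more fields
        - type: Ready
          status: "True"
      networkData:
        interfaceName: net-1
        ips:
          - "10.1.2.3/24"
```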
If you can wrap the Markdown source, that'll help localization teams.
Noted, thanks! I'll make the adjustment after handling other suggestions
(this wrapping can come post-merge or post-publication, and anyone from release comms can propose the change)
Thanks for all the comments, and sorry for not acting on them yet as I'm a bit swamped with personal matters at the moment 🙇 I'll make sure to handle all the actions by EOD today
Thanks for all the reviews!
For simplicity's sake, I have replied to most of the discussions with a singular first-person view. But this has been a team effort to get everything in place, so I would like to make sure that credit goes out to all the Comms team as well ⭐
As mentioned in some comments, I am following up with separate commits to do some more clean up and rewording.
## Deprecations and removals for Kubernetes v1.33

### Deprecate v1.Endpoints ([KEP-4974](https://kep.k8s.io/4974))
I got some clarification in the SIG Docs call that URLs in headings are to be avoided, as they can cause trouble for accessibility needs such as screen readers. The documentation style guide has a mention of this (which was apparently recently added), and I will make sure to comply with that guideline in a separate commit.
https://kubernetes.io/docs/contribute/style/style-guide/#headings
The original heading wording was based on the exact title of the KEP issue. This was intended to ensure the community will have a clear idea of which KEP we are talking about here, especially for deprecation.
I think I will update the title as you suggested, and add a note about the KEP at the end of the paragraph.
This deprecation only impacts only those who use the Endpoints API directly from workloads or scripts; these users should migrate to use EndpointSlices instead. You can find more about the deprecation implications and migration plans in a dedicated blog post [Endpoints formally deprecated in favor of EndpointSlices](TBC).

### Deprecate `status.nodeInfo.kubeProxyVersion` field ([KEP-4004](https://kep.k8s.io/4004))
As mentioned in the other comment, the original heading is the exact copy of the KEP title. I will rephrase this, but also keep the KEP title somewhere in the paragraph below.
This KEP introduces a more structured deletion process for Kubernetes namespaces to ensure secure and deterministic resource removal. The current semi-random deletion order can create security gaps or unintended behaviour, such as Pods persisting after their associated NetworkPolicies are deleted. By enforcing a structured deletion sequence that respects logical and security dependencies, this approach ensures Pods are removed before other resources. The design improves Kubernetes’s security and reliability by mitigating risks associated with non-deterministic deletions.

### Enhancements to Kubernetes Job Management and Persistent Volume Policies ([KEP-3850](https://kep.k8s.io/3850), [KEP-3998](https://kep.k8s.io/3998), [KEP-2644](https://kep.k8s.io/2644))
Let me drop the PVC update here, and instead focus on the Job updates only. The two KEPs are very closely related as they are both about indexed Jobs, and that would probably flow more naturally.
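For context, the Job-side changes those two KEPs track show up as fields on an Indexed Job's spec, roughly like this (illustrative values; field names per the per-index backoff and success policy KEPs):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo                 # illustrative name
spec:
  completionMode: Indexed
  completions: 10
  parallelism: 3
  backoffLimitPerIndex: 1            # retries are counted per index rather than per Job
  maxFailedIndexes: 2                # fail the whole Job once this many indexes have failed
  successPolicy:                     # declare the Job successful once these indexes succeed
    rules:
      - succeededIndexes: "0-2"
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: registry.k8s.io/pause:3.9
```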
- Follow us on Bluesky [@Kubernetesio](https://bsky.app/profile/kubernetes.io) for the latest updates
- Join the community discussion on [Discuss](https://discuss.kubernetes.io/)
- Join the community on [Slack](http://slack.k8s.io/)
- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
Oh good to know -- should we keep both? Or just Server Fault?
Both
- Post questions (or answer questions) on [Server Fault](https://serverfault.com/questions/tagged/kubernetes) or [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
Server Fault only
- Post questions (or answer questions) on [Server Fault](https://serverfault.com/questions/tagged/kubernetes)
Further adjustments made based on the open discussions. There are a few still outstanding, and I kept those as unresolved.
For other items, I have resolved them for now, but please feel free to reopen them, or add a new comment 🙏
Edit: I resolved a few obvious ones, but decided to keep the open discussions unresolved, as it would probably be difficult to track how they were handled. I will be resolving them later on Thursday if I don't hear anything.
## Deprecations and removals for Kubernetes v1.33

### Deprecate v1.Endpoints ([KEP-4974](https://kep.k8s.io/4974))
This has been updated with 9fe4dba
This deprecation only impacts only those who use the Endpoints API directly from workloads or scripts; these users should migrate to use EndpointSlices instead. You can find more about the deprecation implications and migration plans in a dedicated blog post [Endpoints formally deprecated in favor of EndpointSlices](TBC).

### Deprecate `status.nodeInfo.kubeProxyVersion` field ([KEP-4004](https://kep.k8s.io/4004))
This has been updated with 7a100b2
Following its deprecation in v1.31, as highlighted in the [release announcement](/blog/2024/07/19/kubernetes-1-31-upcoming-changes/#deprecation-of-status-nodeinfo-kubeproxyversion-field-for-nodes-kep-4004-https-github-com-kubernetes-enhancements-issues-4004), the `.status.nodeInfo.kubeProxyVersion` field will be removed in v1.33. This field was set by kubelet, but its value was not consistently accurate. As it has been disabled by default since v1.31, the v1.33 release will remove this field entirely.

### Host network support for Windows pods ([KEP-3503](https://kep.k8s.io/3503))
This has been updated with db248bc
### DRA’s ResourceClaim Device Status graduates to beta ([KEP-4817](https://kep.k8s.io/4817))

The `Devices` field in `ResourceClaim.Status`, introduced in v1.32, graduates to beta in v1.33. This field allows drivers to report device status data, improving both observability and troubleshooting capabilities.
The wording has been updated with the above snippet with f18e1a4
This KEP introduces a more structured deletion process for Kubernetes namespaces to ensure secure and deterministic resource removal. The current semi-random deletion order can create security gaps or unintended behaviour, such as Pods persisting after their associated NetworkPolicies are deleted. By enforcing a structured deletion sequence that respects logical and security dependencies, this approach ensures Pods are removed before other resources. The design improves Kubernetes’s security and reliability by mitigating risks associated with non-deterministic deletions.

### Enhancements to Kubernetes Job Management and Persistent Volume Policies ([KEP-3850](https://kep.k8s.io/3850), [KEP-3998](https://kep.k8s.io/3998), [KEP-2644](https://kep.k8s.io/2644))
Updated the wording with 198fb58
@npolshakova @gracenng Do you have a suggestion for where to place the planned release date?
I tried to suggest improvements to the open comments.
I hope it's helpful.
- Follow us on Bluesky [@Kubernetesio](https://bsky.app/profile/kubernetes.io) for the latest updates
- Join the community discussion on [Discuss](https://discuss.kubernetes.io/)
- Join the community on [Slack](http://slack.k8s.io/)
- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
In this article (https://kubernetes.io/blog/2024/12/11/kubernetes-v1-32-release/) we keep Stack Overflow, so I think that having both is not a problem.
/label tide/merge-method-squash
Co-authored-by: Dipesh Rawat <rawat.dipesh@gmail.com> Co-authored-by: Grace Nguyen <42276283+gracenng@users.noreply.github.com> Co-authored-by: Graziano Casto <graziano.casto@outlook.com> Co-authored-by: Kat Cosgrove <kat.cosgrove@gmail.com> Co-authored-by: Nina Polshakova <nina.polshakova@solo.io> Co-authored-by: Ritika <52399571+Ritikaa96@users.noreply.github.com> Co-authored-by: Tim Bannister <tim+github@scalefactory.com>
/remove-label tide/merge-method-squash
Release comms, or @graz-dev (as an active participant in blog work): one of you can now send in a new small PR to mark this article as not-draft. @katcosgrove @natalisucks if you'd like to send that PR in, that's good too.
LGTM label has been added. Git tree hash: 6b2051226191d3811d5e5ee07f12dbd77c409c7a
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: npolshakova, sftim.
@sftim I can do it by the end of the day! 😉
Add 2025-03-24-kubernetes-1.33-sneak-peek.md
Preview link: https://deploy-preview-50111--kubernetes-io-main-staging.netlify.app/blog/2025/03/24/kubernetes-v1-33-upcoming-changes/