
Kubernetes v1.33 Mid Cycle Sneak Peek Blog #50111

Merged
merged 1 commit into kubernetes:main on Mar 20, 2025

Conversation

rytswd
Member

@rytswd rytswd commented Mar 16, 2025

@k8s-ci-robot k8s-ci-robot added the area/blog Issues or PRs related to the Kubernetes Blog subproject label Mar 16, 2025
@k8s-ci-robot k8s-ci-robot added language/en Issues or PRs related to English language cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Mar 16, 2025

netlify bot commented Mar 16, 2025

Pull request preview available for checking

Built without sensitive environment variables

| Name | Link |
|------|------|
| 🔨 Latest commit | 1e13e37 |
| 🔍 Latest deploy log | https://app.netlify.com/sites/kubernetes-io-main-staging/deploys/67dbe9715e71d800081c573f |
| 😎 Deploy Preview | https://deploy-preview-50111--kubernetes-io-main-staging.netlify.app |

@rytswd
Member Author

rytswd commented Mar 16, 2025

Sorry for the delay with the write-up; we have put together the initial draft, which is now ready for review 🙇

We were working in a separate Google Doc, and it ended up full of edits and comments, which may make it more difficult to review. I am keeping this as a simple PR for the review, but if it would be beneficial to create a separate interactive doc, I can surely do that 👍

Ping @natalisucks @katcosgrove
CC Comms Team @aibarbetta @aakankshabhende @Udi-Hofesh @sn3hay
CC Release Leads @npolshakova @mbianchidev @Vyom-Yadav

/hold

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 16, 2025
Contributor

@graz-dev graz-dev left a comment


@rytswd nice piece!
Just found what's probably a wrong link copy-paste. 👍🏻

@rytswd
Member Author

rytswd commented Mar 16, 2025

Thanks for the review @dipesh-rawat @graz-dev! I have applied all the suggestions so far 👍

@rytswd rytswd requested review from dipesh-rawat and graz-dev March 16, 2025 19:20
Contributor

@katcosgrove katcosgrove left a comment


Some grammatical nits. Looks great otherwise!


### In-Place vertical Pod scalability with mutable PodSpec for resources ([KEP-1287](https://kep.k8s.io/1287))

When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating resources allocated to Pod’s container(s). Currently, As PodSpec’s Container Resources are immutable, updating Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?
Contributor

Suggested change
When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating resources allocated to Pod’s container(s). Currently, As PodSpec’s Container Resources are immutable, updating Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?
When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating resources allocated to a Pod’s container(s). Currently, As PodSpec’s Container Resources are immutable, updating a Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?

Member Author

I find the singular / plural handling in "a Pod's container(s)" quite confusing (and the original wording is already more complex than I'd like). What do you think about updating this to something like the following instead?

Suggested change
When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating resources allocated to Pod’s container(s). Currently, As PodSpec’s Container Resources are immutable, updating Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?
When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating container resources allocated to the Pod. Currently, As PodSpec’s Container Resources are immutable, updating any of the Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?

Contributor

Try not to write "PodSpec"; we prefer `spec` in backticks, separate from Pod in UpperCamelCase.

PodSpec is mostly something you see either as part of the OpenAPI document or in the source code. People operating Kubernetes see `spec` and Pod within manifests and often wouldn't see PodSpec at all.

Member Author

Makes sense, I took the KEP reference directly, but it surely sounds more user-friendly to simply use `spec`.

Suggested change
When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating resources allocated to Pod’s container(s). Currently, As PodSpec’s Container Resources are immutable, updating Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?
When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating container resources allocated to the Pod. Currently, since Pod's `spec.containers.resources` are immutable, updating any of the Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?

I am not sure if `spec.containers.resources` is appropriate, though. I think it would be overkill to use the jq-style syntax `spec.containers[].resources[]`?

Contributor

Your last suggestion looks good; `spec.containers.resources` is better.
If you want to make it more readable, I can suggest something like:

Suggested change
When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating resources allocated to Pod’s container(s). Currently, As PodSpec’s Container Resources are immutable, updating Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?
When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating resources allocated to Pod’s container(s). Currently, since container resources defined in Pod's `spec` are immutable, updating any of them results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?

Contributor

@graz-dev graz-dev left a comment

@rytswd I added some comments to improve readability for unfamiliar readers as well.
Then I suggest some fixes to stay on track with the other "sneak peek" posts published for previous releases (see: https://kubernetes.io/blog/2024/11/08/kubernetes-1-32-upcoming-changes/)


The [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) API has been stable since v1.21, which effectively replaced the original Endpoints API. The original Endpoints API was simple and straightforward, but also posed some challenges when scaling to large numbers of network endpoints. There have been new Service features only added to EndpointSlices API such as dual-stack networking, making the original Endpoints API ready for deprecation.

This deprecation only impacts only those who use the Endpoints API directly from workloads or scripts; these users should migrate to use EndpointSlices instead. You can find more about the deprecation implications and migration plans in a dedicated blog post [Endpoints formally deprecated in favor of EndpointSlices](TBC).
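
For readers who script against the API directly, here is a minimal sketch of the migration (the Service name `my-service` is a placeholder; `kubernetes.io/service-name` is the standard label linking a Service to its EndpointSlices):

```shell
# Before: reading the Endpoints object, which shares the Service's name.
kubectl get endpoints my-service -o yaml

# After: a Service may own several EndpointSlices, selected by a well-known label.
kubectl get endpointslices -l kubernetes.io/service-name=my-service -o yaml
```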
Contributor

Are you planning to update this blog post later, or to have "Endpoints formally deprecated in favor of EndpointSlices" published before this piece? If not, remove the reference to a TBC blog post.

If "Endpoints formally deprecated in favor of EndpointSlices" will be published before this one, I think the best option is:

Suggested change
This deprecation only impacts only those who use the Endpoints API directly from workloads or scripts; these users should migrate to use EndpointSlices instead. You can find more about the deprecation implications and migration plans in a dedicated blog post [Endpoints formally deprecated in favor of EndpointSlices](TBC).
This deprecation only impacts only those who use the Endpoints API directly from workloads or scripts; these users should migrate to use EndpointSlices instead. You can find more about the deprecation implications and migration plans in a [dedicated blog post](TBC).

Member Author

Yes, the idea is to have the dedicated blog post out before this one. But that one is still in draft, and it may be tight to get it released before the mid-cycle blog goes out. I'll keep it as is for now, but will update it according to your suggestion later (this PR is already on hold).


Following its deprecation in v1.31, as highlighted in the [release announcement](/blog/2024/07/19/kubernetes-1-31-upcoming-changes/#deprecation-of-status-nodeinfo-kubeproxyversion-field-for-nodes-kep-4004-https-github-com-kubernetes-enhancements-issues-4004), the `.status.nodeInfo.kubeProxyVersion` field will be removed in v1.33. This field was set by kubelet, but its value was not consistently accurate. As it has been disabled by default since v1.31, the v1.33 release will remove this field entirely.
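
As a quick sanity check, a sketch of how you could confirm nothing in your tooling still reads this field (on clusters at v1.31+ with default settings the values should already be empty):

```shell
# Print the deprecated field for every Node; expect blank values where the
# feature gate has been disabled (the default since v1.31).
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kubeProxyVersion}{"\n"}{end}'
```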

### Host network support for Windows pods ([KEP-3503](https://kep.k8s.io/3503))
Contributor

Suggested change
### Host network support for Windows pods ([KEP-3503](https://kep.k8s.io/3503))
### Host network support for Windows pods

Add the reference to the KEP in the paragraph instead of in the title.

Contributor

Suggested change
### Host network support for Windows pods ([KEP-3503](https://kep.k8s.io/3503))
### Removal of host network support for Windows pods

Member Author

Ditto, this will be updated in a separate commit.

Member Author

This has been updated with db248bc


The following list of enhancements is likely to be included in the upcoming v1.33 release. This is not a commitment and the release content is subject to change.

### In-Place vertical Pod scalability with mutable PodSpec for resources ([KEP-1287](https://kep.k8s.io/1287))
Contributor

Suggested change
### In-Place vertical Pod scalability with mutable PodSpec for resources ([KEP-1287](https://kep.k8s.io/1287))
### In-Place vertical Pod scalability with mutable PodSpec for resources

Add the reference to the KEP in the paragraph instead of in the title.

Contributor

Suggested change
### In-Place vertical Pod scalability with mutable PodSpec for resources ([KEP-1287](https://kep.k8s.io/1287))
### Improvements to in-place vertical scaling for Pods

Member Author

Ditto, I will update the wording here and add the KEP reference in the paragraph below.

There was a Feature Blog back in v1.27 when it made it to alpha, and its title was "In-place Resource Resize for Kubernetes Pods". I think I'll write something similar, like "In-place resource resize for vertical scaling of Pods"?

Member Author

This has been updated with 9e4ca8c

The [KEP-1287](https://kep.k8s.io/1287) is precisely to allow such in-place Pod updates. It opens up various possibilities of vertical scale-up for stateful processes without any downtime, seamless scale-down when the process has only little traffic, and even allocating larger resources during the startup and eventually lowering once the initial setup is complete. This has been released as alpha in v1.27, and is expected to land as beta in v1.33.
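
For illustration, a rough sketch of the workflow (Pod name, image, and resource values are made up; the `resize` subresource shown here is the planned beta shape, and alpha clusters before v1.33 patched the Pod spec directly):

```shell
# A Pod that opts in to in-place resize: CPU can change without a container
# restart, while a memory change restarts only the affected container.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
    - resourceName: memory
      restartPolicy: RestartContainer
    resources:
      requests:
        cpu: 250m
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi
EOF

# Double the CPU allocation in place, without recreating the Pod.
kubectl patch pod resize-demo --subresource resize --patch \
  '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"500m"},"limits":{"cpu":"1"}}}]}}'
```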


### DRA’s ResourceClaim Device Status graduates to beta ([KEP-4817](https://kep.k8s.io/4817))
Contributor

Suggested change
### DRA’s ResourceClaim Device Status graduates to beta ([KEP-4817](https://kep.k8s.io/4817))
### DRA’s ResourceClaim Device Status graduates to beta

Add the reference to the KEP in the paragraph instead of in the title.

Contributor

Watch out for implying that the graduation is definitely going to happen. We don't make promises in the mid-cycle blog unless SIG Architecture would confirm the promise has been made.

Member Author

The release stage may be the most important piece of information for users, and I don't see how else we can highlight these beta / stable features. We have a disclaimer about how things could change before the actual release as well, so this shouldn't read as a promise, but as what we think is worth highlighting given its high probability of making it into the release.

As the KEP is tracked for code freeze, should we keep this as is, and drop the whole section if the code freeze situation changes? Also, we could potentially make a note of this in the Release Announcement if the situation changed from the mid cycle blog.

Member Author

The KEP link has been moved out from the heading with e0f4df1

You can find more information in [Dynamic Resource Allocation: ResourceClaim Device Status](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#resourceclaim-device-status).
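
For illustration, a sketch of how the driver-reported status could be inspected once a claim is allocated (the claim name `gpu-claim` is a placeholder, and the cluster must have DRA enabled; `status.devices` is the field this KEP adds):

```shell
# List the driver-reported device status entries on an allocated ResourceClaim.
kubectl get resourceclaim gpu-claim -o jsonpath='{.status.devices}'
```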


### Ordered Namespace Deletion ([KEP-5080](https://kep.k8s.io/5080))
Contributor

Suggested change
### Ordered Namespace Deletion ([KEP-5080](https://kep.k8s.io/5080))
### Ordered Namespace Deletion

Add the reference to the KEP in the paragraph instead of in the title.

This KEP introduces a more structured deletion process for Kubernetes namespace to ensure secure and deterministic resource removal. The current semi-random deletion order can create security gaps or unintended behaviour, such as Pods persisting after their associated NetworkPolicies are deleted. By enforcing a structured deletion sequence that respects logical and security dependencies, this approach ensures Pods are removed before other resources. The design improves Kubernetes’s security and reliability by mitigating risks associated with non-deterministic deletions.
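
One way to observe the behaviour described above, as a sketch (the namespace name `demo` is a placeholder):

```shell
# Start the namespace deletion without blocking, then poll the teardown:
# with ordered deletion, Pods should reach zero before NetworkPolicies do.
kubectl delete namespace demo --wait=false
while kubectl get namespace demo >/dev/null 2>&1; do
  kubectl get pods,networkpolicies --namespace demo --no-headers 2>/dev/null
  sleep 1
done
```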


### Enhancements to Kubernetes Job Management and Persistent Volume Policies ([KEP-3850](https://kep.k8s.io/3850), [KEP-3998](https://kep.k8s.io/3998), [KEP-2644](https://kep.k8s.io/2644))
Contributor

Suggested change
### Enhancements to Kubernetes Job Management and Persistent Volume Policies ([KEP-3850](https://kep.k8s.io/3850), [KEP-3998](https://kep.k8s.io/3998), [KEP-2644](https://kep.k8s.io/2644))
### Enhancements to Kubernetes Job Management and Persistent Volume Policies

Add the reference to the KEP in the paragraph instead of in the title.

Member Author

Along with the wording update, I have moved the KEP links from the heading with d9da762

Member Author

@rytswd rytswd left a comment

Took most of the suggestions, but a few things left as is for now


## Deprecations and removals for Kubernetes v1.33

### Deprecate v1.Endpoints ([KEP-4974](https://kep.k8s.io/4974))
Member Author

I'm a bit torn about this one -- it is true that you can check the paragraph, and that's the point of having these blogs, making it easier for readers of any level to understand upcoming changes. But for those readers with technical understanding, it would be useful to check out the KEPs to find more.

This is my personal take, but I think KEPs are such a great asset of the Kubernetes community, and I want to make them as accessible as possible. I could take this out of the title, and perhaps put it at the bottom of each section, saying something like "If you want to find out more about this, read this KEP" -- what do you think?



Windows Pod networking aimed to achieve feature parity with Linux and provide better cluster density by allowing containers to use Node’s networking namespace. The original implementation landed as alpha with v1.26, but as it faced unexpected containerd behaviours, and alternative solutions were available, it has been decided that the KEP will be withdrawn and the code removed in v1.33.
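
For context, the withdrawn alpha surface was the existing `hostNetwork` field applied to Windows Pods behind a feature gate; a sketch of the kind of manifest that stops having any effect (node selector and image are placeholders):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: win-hostnetwork-demo
spec:
  nodeSelector:
    kubernetes.io/os: windows
  # The alpha behaviour being removed: with the code gone in v1.33, this
  # field no longer places a Windows container on the node's network.
  hostNetwork: true
  containers:
  - name: app
    image: mcr.microsoft.com/windows/nanoserver:ltsc2022
EOF
```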

## Sneak peek of Kubernetes v1.33
Member Author

I think I'd keep this as "sneak peek" because, at this point, we don't yet know if these changes will actually land in v1.33.
"Upcoming changes" may be a good one, but I'm wondering if it loses a bit of the fun?



@rytswd
Member Author

rytswd commented Mar 18, 2025

I think I incorporated all of the suggestions so far, or left a comment to discuss further.

Please feel free to add more comments / suggestions as you find more!

Member

@gracenng gracenng left a comment

Looks great, team!
One thing I'd add is the expected release date for v1.33, but that's it :)

Contributor

@npolshakova npolshakova left a comment

Looks good to me! Great job everyone!

Contributor

@Ritikaa96 Ritikaa96 left a comment

It's looking great; just suggesting some small grammatical nits.


* Beta or pre-release API versions must be supported for 3 releases after the deprecation.

* Alpha or experimental API versions may be removed in any release without prior deprecation notice; this process can become a withdrawal in cases where a different implementation for the same feature is already in place.
Contributor

Suggested change
* Alpha or experimental API versions may be removed in any release without prior deprecation notice; this process can become a withdrawal in cases where a different implementation for the same feature is already in place.
* Alpha or experimental API versions may be removed in any release without prior deprecation notice; this process can become a withdrawal in cases where a different implementation for the same feature is already in place.

Can we simplify this line? Does it mean that if a different implementation of the same feature already exists, then alpha or experimental API versions may be removed in any release?

Member Author

The nature of alpha features is that they can be removed at any point, with the process called "withdrawal" rather than deprecation. My understanding is that it's essentially the same process, but given it's only alpha, there is no guarantee that it would be supported in future releases.

This wording is something I inherited from previous cycles, and while that's not a reason to keep things unchanged, I personally found this relatively clear and straightforward. I'm open for suggestions, though!

Contributor

This sentence is OK by me.

Contributor

Right, alright. Thanks for your work and explanation.

Contributor

@sftim sftim left a comment

Thanks. Here's some further feedback on behalf of the blog team.

I've added a small number of corrective comments on the existing review from @graz-dev, but overall please do pay attention to Graziano's feedback - it looks appropriate and relevant.


## Deprecations and removals for Kubernetes v1.33

### Deprecate v1.Endpoints ([KEP-4974](https://kep.k8s.io/4974))
Contributor

@sftim sftim Mar 18, 2025

Try this:

Suggested change
### Deprecate v1.Endpoints ([KEP-4974](https://kep.k8s.io/4974))
## Deprecation of the stable Endpoints API

This is not really an enhancement, unlike some of the other things we're giving a sneak peek into.


This deprecation only impacts only those who use the Endpoints API directly from workloads or scripts; these users should migrate to use EndpointSlices instead. You can find more about the deprecation implications and migration plans in a dedicated blog post [Endpoints formally deprecated in favor of EndpointSlices](TBC).

### Deprecate `status.nodeInfo.kubeProxyVersion` field ([KEP-4004](https://kep.k8s.io/4004))
Contributor

@sftim sftim Mar 18, 2025

I agree with @graz-dev - omit the hyperlink here. Links within headings don't work well, although we do sometimes use them.

I'd write:

Suggested change
### Deprecate `status.nodeInfo.kubeProxyVersion` field ([KEP-4004](https://kep.k8s.io/4004))
### Removal of kube-proxy version information in node status

I'm afraid to point it out, but: the existing heading is almost misleading: people might think we're deprecating the field, not removing a deprecated field.


Following its deprecation in v1.31, as highlighted in the [release announcement](/blog/2024/07/19/kubernetes-1-31-upcoming-changes/#deprecation-of-status-nodeinfo-kubeproxyversion-field-for-nodes-kep-4004-https-github-com-kubernetes-enhancements-issues-4004), the `.status.nodeInfo.kubeProxyVersion` field will be removed in v1.33. This field was set by kubelet, but its value was not consistently accurate. As it has been disabled by default since v1.31, the v1.33 release will remove this field entirely.

### Host network support for Windows pods ([KEP-3503](https://kep.k8s.io/3503))
Contributor

Suggested change
### Host network support for Windows pods ([KEP-3503](https://kep.k8s.io/3503))
### Removal of host network support for Windows pods


### DRA’s ResourceClaim Device Status graduates to beta ([KEP-4817](https://kep.k8s.io/4817))

The `Devices` field in `ResourceClaim.Status`, introduced in v1.32, graduates to beta in v1.33. This field allows drivers to report device status data, improving both observability and troubleshooting capabilities.
Contributor

Suggested change
The `Devices` field in `ResourceClaim.Status`, introduced in v1.32, graduates to beta in v1.33. This field allows drivers to report device status data, improving both observability and troubleshooting capabilities.
The `devices` field in with ResourceClaim `status`, originally introduced in the v1.32 release, is likely to graduate to beta in v1.33. This field allows drivers to report device status data, improving both observability and troubleshooting capabilities.

Member Author

Thanks, I will slightly update your suggestion with the following:

Suggested change
The `Devices` field in `ResourceClaim.Status`, introduced in v1.32, graduates to beta in v1.33. This field allows drivers to report device status data, improving both observability and troubleshooting capabilities.
The `devices` field in ResourceClaim `status`, originally introduced in the v1.32 release, is likely to graduate to beta in v1.33. This field allows drivers to report device status data, improving both observability and troubleshooting capabilities.

Member Author

The wording has been updated with the above snippet with f18e1a4


### In-Place vertical Pod scalability with mutable PodSpec for resources ([KEP-1287](https://kep.k8s.io/1287))

When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating resources allocated to Pod’s container(s). Currently, As PodSpec’s Container Resources are immutable, updating Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?
Contributor

If you can wrap the Markdown source, that'll help localization teams.

Member Author

Noted, thanks! I'll make the adjustment after handling other suggestions

Contributor

(this wrapping can come post-merge or post-publication, and anyone from release comms can propose the change)

@rytswd
Member Author

rytswd commented Mar 19, 2025

Thanks for all the comments, and sorry for not acting on them yet as I'm a bit swamped with personal matters at the moment 🙇 I'll make sure to handle all the actions by EOD today

Member Author

@rytswd rytswd left a comment

Thanks for all the reviews!

For simplicity's sake, I have replied to most of the discussions in the first person singular. But this has been a team effort to get everything in place, so I would like to make sure the credit goes to the whole Comms team as well ⭐

As mentioned in some comments, I am following up with separate commits to do some more clean up and rewording.


## Deprecations and removals for Kubernetes v1.33

### Deprecate v1.Endpoints ([KEP-4974](https://kep.k8s.io/4974))
Member Author

I got some clarification in the SIG Docs call that URLs in headings are to be avoided, as they can cause trouble for accessibility tools such as screen readers. The documentation style guide has a mention of this (which was apparently recently added), and I will make sure to comply with that guideline in a separate commit.
https://kubernetes.io/docs/contribute/style/style-guide/#headings

The original heading wording was based on the exact title of the KEP issue. This was intended to ensure the community will have a clear idea of which KEP we are talking about here, especially for deprecation.

I think I will update the title as you suggested, and add a note about the KEP at the end of the paragraph.


This deprecation only impacts only those who use the Endpoints API directly from workloads or scripts; these users should migrate to use EndpointSlices instead. You can find more about the deprecation implications and migration plans in a dedicated blog post [Endpoints formally deprecated in favor of EndpointSlices](TBC).

### Deprecate `status.nodeInfo.kubeProxyVersion` field ([KEP-4004](https://kep.k8s.io/4004))
Member Author

As mentioned in the other comment, the original heading is an exact copy of the KEP title. I will rephrase it, but also keep the KEP title somewhere in the paragraph below.


This KEP introduces a more structured deletion process for Kubernetes namespaces to ensure secure and deterministic resource removal. The current semi-random deletion order can create security gaps or unintended behaviour, such as Pods persisting after their associated NetworkPolicies are deleted. By enforcing a structured deletion sequence that respects logical and security dependencies, this approach ensures Pods are removed before other resources. The design improves Kubernetes’s security and reliability by mitigating risks associated with non-deterministic deletions.


### Enhancements to Kubernetes Job Management and Persistent Volume Policies ([KEP-3850](https://kep.k8s.io/3850), [KEP-3998](https://kep.k8s.io/3998), [KEP-2644](https://kep.k8s.io/2644))
Member Author

Let me drop the PVC update here, and instead focus on the Job updates only. The two KEPs are very closely related, as they are both about Indexed Jobs, and that would probably flow more naturally.

- Follow us on Bluesky [@Kubernetesio](https://bsky.app/profile/kubernetes.io) for the latest updates
- Join the community discussion on [Discuss](https://discuss.kubernetes.io/)
- Join the community on [Slack](http://slack.k8s.io/)
- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
Member Author

Oh good to know -- should we keep both? Or just Server Fault?

Both

Suggested change
- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
- Post questions (or answer questions) on [Server Fault](https://serverfault.com/questions/tagged/kubernetes) or [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)

Server Fault only

Suggested change
- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
- Post questions (or answer questions) on [Server Fault](https://serverfault.com/questions/tagged/kubernetes)

Member Author

@rytswd rytswd left a comment

Further adjustments made based on the open discussions. There are a few still outstanding, and I kept those as unresolved.

For other items, I have resolved them for now, but please feel free to reopen them, or add a new comment 🙏

Edit: I resolved a few obvious ones, but decided to keep the open discussions unresolved, as it would probably be difficult to track how they were handled otherwise. I will resolve them later on Thursday if I don't hear anything.


## Deprecations and removals for Kubernetes v1.33

### Deprecate v1.Endpoints ([KEP-4974](https://kep.k8s.io/4974))
Member Author

This has been updated with 9fe4dba


This deprecation only impacts only those who use the Endpoints API directly from workloads or scripts; these users should migrate to use EndpointSlices instead. You can find more about the deprecation implications and migration plans in a dedicated blog post [Endpoints formally deprecated in favor of EndpointSlices](TBC).

### Deprecate `status.nodeInfo.kubeProxyVersion` field ([KEP-4004](https://kep.k8s.io/4004))
Member Author

This has been updated with 7a100b2



This KEP introduces a more structured deletion process for Kubernetes namespaces to ensure secure and deterministic resource removal. The current semi-random deletion order can create security gaps or unintended behaviour, such as Pods persisting after their associated NetworkPolicies are deleted. By enforcing a structured deletion sequence that respects logical and security dependencies, this approach ensures Pods are removed before other resources. The design improves Kubernetes’s security and reliability by mitigating risks associated with non-deterministic deletions.


### Enhancements to Kubernetes Job Management and Persistent Volume Policies ([KEP-3850](https://kep.k8s.io/3850), [KEP-3998](https://kep.k8s.io/3998), [KEP-2644](https://kep.k8s.io/2644))
Member Author

Updated the wording with 198fb58


@rytswd rytswd requested a review from Ritikaa96 March 20, 2025 03:11
@rytswd
Member Author

rytswd commented Mar 20, 2025

@npolshakova @gracenng Do you have a suggestion for where to place the planned release date?
I tried to add it in the "Want to know more?" section -- how does that look? (Attached is a contrived local preview)


@rytswd rytswd requested review from sftim and gracenng March 20, 2025 03:19
Contributor

@graz-dev graz-dev left a comment

I tried to suggest improvements to the open comments.
I hope it's helpful.



- Follow us on Bluesky [@Kubernetesio](https://bsky.app/profile/kubernetes.io) for the latest updates
- Join the community discussion on [Discuss](https://discuss.kubernetes.io/)
- Join the community on [Slack](http://slack.k8s.io/)
- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
Contributor

In this article (https://kubernetes.io/blog/2024/12/11/kubernetes-v1-32-release/) we keep Stack Overflow, so I think that having both is not a problem.

@sftim
Contributor

sftim commented Mar 20, 2025

/label tide/merge-method-squash

@k8s-ci-robot k8s-ci-robot added the tide/merge-method-squash Denotes a PR that should be squashed by tide when it merges. label Mar 20, 2025
Co-authored-by: Dipesh Rawat <rawat.dipesh@gmail.com>
Co-authored-by: Grace Nguyen <42276283+gracenng@users.noreply.github.com>
Co-authored-by: Graziano Casto <graziano.casto@outlook.com>
Co-authored-by: Kat Cosgrove <kat.cosgrove@gmail.com>
Co-authored-by: Nina Polshakova <nina.polshakova@solo.io>
Co-authored-by: Ritika <52399571+Ritikaa96@users.noreply.github.com>
Co-authored-by: Tim Bannister <tim+github@scalefactory.com>
@sftim
Contributor

sftim commented Mar 20, 2025

/remove-label tide/merge-method-squash
/lgtm
/approve
/hold cancel

Release comms, or @graz-dev (as an active participant in blog work): one of you can now send in a new small PR to mark this article as not-draft

@katcosgrove @natalisucks if you'd like to send that PR in, that's good too.

@k8s-ci-robot k8s-ci-robot removed do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. tide/merge-method-squash Denotes a PR that should be squashed by tide when it merges. labels Mar 20, 2025
@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Mar 20, 2025
@k8s-ci-robot
Contributor

LGTM label has been added.

Git tree hash: 6b2051226191d3811d5e5ee07f12dbd77c409c7a

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: npolshakova, sftim

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 20, 2025
@sftim sftim dismissed Ritikaa96’s stale review March 20, 2025 10:12

Looks OK to merge as-is

@k8s-ci-robot k8s-ci-robot merged commit a7146da into kubernetes:main Mar 20, 2025
2 checks passed
@graz-dev
Contributor

@sftim I can do it by the end of the day! 😉
