KEP5598: update docs for Opportunistic Batching feature #52899
Conversation
Deploy Preview for kubernetes-io-vnext-staging: ✅ pull request preview available, built without sensitive environment variables.
Hello @dom4ha 👋, v1.35 Docs Team here again! We are closing in on the deadline to get your PR ready for review before Tuesday 18th November 2025, so I'm sending a second reminder. Please take a look at the Documenting for a release - PR Ready for Review document to get your PR ready for review before the deadline. Please also let us know once your PR is fully Ready for Review -- meaning all documentation updates are complete and it's awaiting reviewer feedback -- so we can update our tracking. Thank you!
Force-pushed from cad4cbc to d96f8a0
Hello @dom4ha 👋! I'm reaching out from the Docs team. Just checking in as we approach Docs Freeze on 3rd December 2025, 12:00 UTC.
Force-pushed from d96f8a0 to 13d70c4
Six outdated review comments on content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md (all resolved).
1. The scheduling profile needs to disable [default topology spread](/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints)
1. `IgnorePreferredTermsOfExistingPods` of [InterPodAffinityArgs](/docs/reference/config-api/kube-scheduler-config.v1/#kubescheduler-config-k8s-io-v1-InterPodAffinityArgs) can be set to `true` to make the batching more efficient.
Let's explain those two separately, because they are "what a cluster admin has to configure in their cluster", while the others are "what kind of pods can get the benefits from this feature".
So, overall, I'd prefer the explanation to read like:
This feature takes effect only for specific pods for now:
- No topology spread constraints.
- No DRA.
- ...
Also, to enable this feature in your scheduler, you have to configure the scheduler to:
- Change `IgnorePreferredTermsOfExistingPods` to `true`
- Set the default topology spread constraints to empty.
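As a rough illustration of those two admin-side settings, they could land in a `KubeSchedulerConfiguration` along these lines (the profile name is made up, and this is a sketch of the intent rather than the feature's official configuration):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: batching-scheduler   # hypothetical profile name
    pluginConfig:
      # Make batching more efficient by ignoring the preferred
      # inter-pod affinity terms of existing pods.
      - name: InterPodAffinity
        args:
          ignorePreferredTermsOfExistingPods: true
      # Disable the internal default topology spread constraints by
      # switching to an explicit (empty) list of defaults.
      - name: PodTopologySpread
        args:
          defaultingType: List
```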
Good point. Broke it down into 4 categories
/sig scheduling
Force-pushed from 3426b75 to 157bd0c
This feature takes effect only in specific conditions:
1. Pods need to have equivalent scheduling constraints
1. Pods need to be scheduled at the same time (the cache expires after 0.5 s)
1. Pods cannot be interleaved with other pods
1. Only one batched pod can fit on one node (placing 2 pods on one node invalidates the cache)
I feel like it's easier to explain the basic flow first, and then explain these conditions without bullet points. Maybe like:

Replace:
> This feature takes effect only in specific conditions:
> 1. Pods need to have equivalent scheduling constraints
> 1. Pods need to be scheduled at the same time (the cache expires after 0.5 s)
> 1. Pods cannot be interleaved with other pods
> 1. Only one batched pod can fit on one node (placing 2 pods on one node invalidates the cache)

with:
> Basically, this feature works like:
> 1. The scheduler schedules pod-1 and caches the scheduling result.
> 1. The scheduler schedules pod-2, 3, ... with the cached results.
> 1. The cache expires after 0.5 seconds. The scheduler schedules the next pod without the cache.
>
> Pods with equivalent scheduling constraints have to come to the scheduling cycle back to back. When the scheduler schedules a pod with different constraints, the cache is flushed at that point.

Also, I'd move "Only one batched pod can fit on one node (placing 2 pods on one node invalidates the cache)" to the next section.
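The flow suggested above can be sketched as a toy cache model (this is purely illustrative Python, not the scheduler's actual code; the class and variable names are invented):

```python
import time

CACHE_TTL = 0.5  # seconds, matching the 0.5 s expiry described above


class BatchingCache:
    """Toy model of opportunistic batching: a cached scheduling result
    is reused only for the next pods with an equivalent constraint key,
    and it is dropped once the TTL passes or a pod with a different
    key arrives."""

    def __init__(self):
        self.key = None      # equivalence key of the cached pod
        self.result = None   # cached scheduling result
        self.stamp = 0.0     # when the result was cached

    def schedule(self, pod_key, compute_result):
        now = time.monotonic()
        expired = (now - self.stamp) > CACHE_TTL
        if self.key == pod_key and not expired:
            return self.result, True  # cache hit: reuse the result
        # Different constraints or expired: flush and recompute.
        self.key, self.result, self.stamp = pod_key, compute_result(), now
        return self.result, False


cache = BatchingCache()
r1, hit1 = cache.schedule("classA", lambda: "node-1")  # computed fresh
r2, hit2 = cache.schedule("classA", lambda: "node-2")  # reuses cached result
r3, hit3 = cache.schedule("classB", lambda: "node-3")  # different key flushes
```

Note how the second call returns the first pod's cached result without invoking the compute function, while the `classB` pod flushes the cache, mirroring the "back to back" requirement in the comment.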
And specific pods that do not use:
1. Inter-pod affinity/anti-affinity
1. Topology spread constraints
1. DRA (cannot have any ResourceClaims)
Replace:
> And specific pods that do not use:
> 1. Inter-pod affinity/anti-affinity
> 1. Topology spread constraints
> 1. DRA (cannot have any ResourceClaims)

with:
> We apply this batching scheduling to specific pods that:
> 1. don't have inter-pod affinity/anti-affinity
> 1. don't have topology spread constraints
> 1. don't use DRA (i.e., don't have any ResourceClaims)
> 1. are scheduled exclusively on nodes (i.e., placing more than one pod on one node invalidates the cache)
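The pod-side conditions in the suggestion above amount to a simple predicate. As a hedged sketch (the function and the dictionary keys are illustrative; the keys echo Pod spec field names but this is not the scheduler's real eligibility code):

```python
def eligible_for_batching(pod: dict) -> bool:
    """Toy eligibility check mirroring the suggested list: batching is
    skipped for pods that use inter-pod (anti-)affinity, topology
    spread constraints, or DRA ResourceClaims."""
    return not (pod.get("affinity")
                or pod.get("topologySpreadConstraints")
                or pod.get("resourceClaims"))


plain_pod = {}  # no affinity, no spread constraints, no claims
dra_pod = {"resourceClaims": [{"name": "gpu-claim"}]}  # uses DRA
```

Under this sketch, `plain_pod` qualifies for the batching fast path while `dra_pod` does not.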
Force-pushed from b7fd102 to f80338e
sanposhiho
left a comment
/lgtm
/approve
sig-scheduling tech review
LGTM label has been added. Git tree hash: 384e2eea1d5e29c88040d70e1e6dcdc2bb07517c
natalisucks
left a comment
review from docs, some nits, some required – thanks for this PR!
Three more outdated review comments on content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md (all resolved).
Force-pushed from f80338e to 97116b1
I applied all your comments.
divya-mohan0209
left a comment
LGTM from the docs side. Also, re-adding the approval from SIG Scheduling since that was removed in a previous commit due to stylistic changes suggested from our side.
/approve
/lgtm
LGTM label has been added. Git tree hash: 7247c1f099688ddc06995063030fbbf7bb33aac2
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: divya-mohan0209, sanposhiho. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Description
Issue
Enhancements issue: kubernetes/enhancements#5598