KEP-4781: Fix inconsistent container start and ready state after kubelet restart #4784

Open · wants to merge 4 commits into base: master

Changes from 1 commit
add KEP4781
pololowww committed Aug 23, 2024
commit 8a625c430f616426ba3b04a8164ff2132a0dc60e
@@ -299,7 +299,7 @@

$$
newReplicaSetReplicas = replicasBeforeScale * \frac{deploymentMaxReplicas}{deploymentMaxReplicasBeforeScale}
$$
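
As a quick worked instance (illustrative numbers, and assuming the denominator is the Deployment's annotated maximum replicas before the scale): a ReplicaSet that held 4 replicas of a before-scale maximum of 10 is, once the Deployment maximum grows to 15, proportionally sized to

$$
newReplicaSetReplicas = 4 * \frac{15}{10} = 6
$$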

This is currently done in the [getReplicaSetFraction](https://github.com/kubernetes/kubernetes/blob/1cfaa95cab0f69ecc62ad9923eec2ba15f01fc2a/pkg/controller/deployment/util/deployment_util.go#L492-L512)
- function. The leftover pods are added to the newest ReplicaSet.
+ function. The leftover pods are added to the largest ReplicaSet (or newest if more than one ReplicaSet has the largest number of pods).

This results in the following scaling behavior.

@@ -364,7 +364,7 @@

As we can see, we will get a slightly different result when compared to the first example
due to the consecutive scales and the fact that the last scale is not yet fully completed.

The consecutive partial scaling behavior is a best effort. We still adhere to all deployment
- constraints and have a bias toward scaling the newest ReplicaSet. To implement this properly we
+ constraints and have a bias toward scaling the largest ReplicaSet. To implement this properly we
would have to introduce a full scaling history, which is probably not worth the added complexity.
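
To make the proportional sizing and the leftover-pod rule above concrete, here is a minimal sketch. It is not the actual deployment controller code; the names, types, and the simplified leftover handling are made up for illustration, and it assumes the denominator is the Deployment's annotated maximum replicas before the scale.

```go
// Illustrative sketch of proportional ReplicaSet scaling with leftovers going
// to the largest ReplicaSet (newest wins ties). Not the real deployment_util.go code.
package main

import (
	"fmt"
	"math"
	"sort"
)

type replicaSet struct {
	name              string
	replicas          int32 // replicas before the scale
	maxReplicasBefore int32 // Deployment's annotated max replicas before the scale (assumption)
	creation          int   // larger value means a newer ReplicaSet
}

// proportionalScale sizes each ReplicaSet proportionally to the new Deployment
// maximum, then hands the remaining difference to the largest ReplicaSet,
// preferring the newest one when several are equally large. The leftover can be
// negative when scaling down; it is applied the same way.
func proportionalScale(rss []replicaSet, deploymentMaxReplicas int32) map[string]int32 {
	sizes := map[string]int32{}
	var allocated int32
	for _, rs := range rss {
		// newReplicaSetReplicas = replicasBeforeScale * deploymentMaxReplicas / deploymentMaxReplicasBeforeScale
		newSize := int32(math.Round(float64(rs.replicas) * float64(deploymentMaxReplicas) / float64(rs.maxReplicasBefore)))
		sizes[rs.name] = newSize
		allocated += newSize
	}

	// Pick the largest ReplicaSet, breaking ties by newest, and give it the leftover.
	sorted := append([]replicaSet(nil), rss...)
	sort.Slice(sorted, func(i, j int) bool {
		if sorted[i].replicas != sorted[j].replicas {
			return sorted[i].replicas > sorted[j].replicas
		}
		return sorted[i].creation > sorted[j].creation
	})
	sizes[sorted[0].name] += deploymentMaxReplicas - allocated
	return sizes
}

func main() {
	// Three equally sized ReplicaSets; the Deployment maximum grows from 6 to 7.
	rss := []replicaSet{
		{name: "rs-old", replicas: 2, maxReplicasBefore: 6, creation: 1},
		{name: "rs-mid", replicas: 2, maxReplicasBefore: 6, creation: 2},
		{name: "rs-new", replicas: 2, maxReplicasBefore: 6, creation: 3},
	}
	fmt.Println(proportionalScale(rss, 7)) // map[rs-mid:2 rs-new:3 rs-old:2]
}
```

In this example each ReplicaSet rounds to 2 replicas, and the single leftover pod lands on the newest of the equally sized ReplicaSets, matching the tie-break described above.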

### kubectl Changes