
feat: Supports upper-layer modification of the InstanceSet's UpdateStrategy #7939

Open
YTGhost wants to merge 1 commit into main

Conversation

Contributor

@YTGhost commented Aug 7, 2024

resolve #7913

@github-actions bot added the size/XL label (denotes a PR that changes 500-999 lines) Aug 7, 2024
// Partition are updated. All pods from ordinal Partition-1 to 0 remain untouched.
// This is helpful in being able to do a canary based deployment. The default value is 0.
// +optional
Partition *int32 `json:"partition,omitempty"`
Contributor

Do we need partition? @free6om

After introducing instance templates, the naming of Pods managed by an InstanceSet follows this pattern: "comp-0, comp-1, comp-tpl0-0, comp-tpl1-0, comp-tpl1-1"

Unlike before, the Pods no longer have a linear ordinal numbering scheme. This makes specifying a partition much more challenging.

Contributor Author

@YTGhost Aug 12, 2024

Yes, I feel the same way. Currently, the partition feature allows Pods to be rolled in descending lexicographic (dictionary) order of their names, but there seems to be no way to perform a rolling update on a specific template at the moment.
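For illustration, here is a minimal Go sketch of that ordering. It assumes a single global partition that simply counts how many instances, in ascending lexicographic order of their names, are left untouched, while the remainder are updated from the largest name downward; it is a sketch of the idea, not the controller's actual code.

```go
package main

import (
	"fmt"
	"sort"
)

// podsToUpdate returns the pod names eligible for update, largest name first.
// The partition semantics here are an assumption for illustration: the first
// `partition` names (ascending lexicographic order) stay untouched.
func podsToUpdate(podNames []string, partition int) []string {
	names := append([]string(nil), podNames...)
	sort.Strings(names)
	if partition > len(names) {
		partition = len(names)
	}
	eligible := names[partition:]
	// Roll out from the lexicographically largest name down to the cut.
	sort.Sort(sort.Reverse(sort.StringSlice(eligible)))
	return eligible
}

func main() {
	pods := []string{"comp-0", "comp-1", "comp-tpl0-0", "comp-tpl1-0", "comp-tpl1-1"}
	fmt.Println(podsToUpdate(pods, 3)) // [comp-tpl1-1 comp-tpl1-0]
}
```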

Contributor

How do we do a RollingUpdate then?

// That means if there is any unavailable pod in the range 0 to Replicas-1,
// it will be counted towards MaxUnavailable.
// +optional
MaxUnavailable *intstr.IntOrString `json:"maxUnavailable,omitempty"`
Contributor

@weicao Aug 10, 2024

The term 'maxUnavailable' is misleading. In the original StatefulSet, updates always involved restarting pods, and any update to a pod would cause it to become unavailable. Therefore, controlling the number of maxUnavailable pods represented the level of concurrency.

However, in InstanceSet, if we continue to use 'maxUnavailable', then strictly speaking, non-restarting updates (i.e., those that don't cause a pod to become unavailable) would not be controlled by this parameter at all.

Contributor

Why should we control non-restarting updates, given that they take effect immediately after the Pods are updated? Doing all the non-restarting updates in one reconciliation loop seems no different from doing them across several reconciliation loops.

Contributor

@weicao Aug 13, 2024

Because a 'non-restarting update' is still a change to the production environment, it should also be controllable by a gradual upgrade strategy. For example, it's reasonable to allow users to apply a change to a small number of replicas first and, after verification, gradually roll it out to more replicas.

e.g., a 'non-restarting update' might involve parameter modifications, which could lead to issues such as performance degradation, or an IP whitelist change, which could inadvertently block legitimate traffic from applications.

Contributor

To summarize, incorrect 'non-restarting updates' can potentially harm business continuity. Therefore, users may require a gradual update process. This is something we need to consider.

Contributor Author

@weicao I think that for 'non-restarting updates' we don't have a way to automate the control. How about using partition to control them manually instead?

Contributor

@weicao Aug 14, 2024

> @weicao I think that for 'non-restarting updates' we don't have a way to automate the control. How about using partition to control them manually instead?

When you have multiple instance templates, how do you plan to use partitions?

Contributor

What I mean is: say you have 3 templates and a total of 10 replicas. When you set the partition to 7 (upgrading the 8th, 9th, and 10th replicas), it becomes difficult to determine which templates those 8th, 9th, and 10th replicas belong to, which makes it challenging to verify whether the updated replicas meet the expected outcomes.
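As a worked illustration of that point (the pod names and template layout below are hypothetical, not taken from this PR), assuming a global, descending lexicographic update order:

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	// Hypothetical layout: 10 replicas spread across the default template
	// and two instance templates.
	pods := []string{
		"comp-0", "comp-1", "comp-2", "comp-3", "comp-4", // default template
		"comp-tpl0-0", "comp-tpl0-1", "comp-tpl0-2", // template tpl0
		"comp-tpl1-0", "comp-tpl1-1", // template tpl1
	}
	sort.Strings(pods)

	partition := 7
	eligible := pods[partition:] // the 8th, 9th, and 10th names
	sort.Sort(sort.Reverse(sort.StringSlice(eligible)))

	// Prints [comp-tpl1-1 comp-tpl1-0 comp-tpl0-2]: the update set straddles
	// tpl1 and tpl0, which is why it is hard to verify a specific template
	// after the rollout.
	fmt.Println(eligible)
}
```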

Contributor Author

> What I mean is: say you have 3 templates and a total of 10 replicas. When you set the partition to 7 (upgrading the 8th, 9th, and 10th replicas), it becomes difficult to determine which templates those 8th, 9th, and 10th replicas belong to, which makes it challenging to verify whether the updated replicas meet the expected outcomes.

@weicao We have also considered this issue. However, based on our current requirements, we do not need to specify a separate partition per template for canary (gray) upgrades. Nevertheless, I understand the need for multi-template partitioning, and we can discuss and design a solution for it.

Contributor Author

Currently, our requirement is only for the partition to perform a global update in lexicographic order.

//
// +kubebuilder:validation:Enum={Serial,BestEffortParallel,Parallel}
// +optional
MemberUpdateStrategy *MemberUpdateStrategy `json:"memberUpdateStrategy,omitempty"`
Contributor

'memberUpdateStrategy' and 'maxUnavailable' are not orthogonal.

  • When 'memberUpdateStrategy' is set to 'serial', 'maxUnavailable' has no effect, right?
  • When 'memberUpdateStrategy' is set to 'bestEffortParallel', the concurrency is calculated based on quorum, so 'maxUnavailable' should not have an effect either, right?
  • Therefore, does 'maxUnavailable' only take effect when 'memberUpdateStrategy' is set to 'parallel'?
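A minimal sketch of one way these three cases could fit together, treating maxUnavailable as an upper bound that can only tighten the strategy's own limit; the helper below and its quorum rule are assumptions for illustration, not the InstanceSet implementation:

```go
package main

import "fmt"

type MemberUpdateStrategy string

const (
	SerialStrategy             MemberUpdateStrategy = "Serial"
	BestEffortParallelStrategy MemberUpdateStrategy = "BestEffortParallel"
	ParallelStrategy           MemberUpdateStrategy = "Parallel"
)

// concurrencyBudget returns how many pods may be updated (made unavailable)
// at once. maxUnavailable is the already-resolved integer value of the
// maxUnavailable field (0 means "unset").
func concurrencyBudget(strategy MemberUpdateStrategy, replicas, maxUnavailable int) int {
	switch strategy {
	case SerialStrategy:
		return 1 // one pod at a time; maxUnavailable has no effect
	case BestEffortParallelStrategy:
		// Keep a quorum (majority) of replicas available; maxUnavailable can
		// only tighten this bound, never loosen it (an assumption).
		budget := replicas - replicas/2 - 1
		if budget < 1 {
			budget = 1
		}
		if maxUnavailable > 0 && maxUnavailable < budget {
			budget = maxUnavailable
		}
		return budget
	case ParallelStrategy:
		if maxUnavailable > 0 {
			return maxUnavailable // maxUnavailable is the only bound
		}
		return replicas // unset: everything may be updated at once
	}
	return 1
}

func main() {
	fmt.Println(concurrencyBudget(SerialStrategy, 5, 2))             // 1
	fmt.Println(concurrencyBudget(BestEffortParallelStrategy, 5, 2)) // 2
	fmt.Println(concurrencyBudget(ParallelStrategy, 5, 2))           // 2
}
```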

Contributor

Yes, but MaxUnavailable means that no more than the number of Pods given by MaxUnavailable may be unavailable at any time. It's an upper bound.
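For reference, a minimal sketch of resolving such an upper bound into a concrete number, assuming maxUnavailable may be either an absolute value or a percentage of replicas, as in other Kubernetes workload APIs; the resolveMaxUnavailable helper and its default of 1 are assumptions for illustration:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/intstr"
)

// resolveMaxUnavailable is a hypothetical helper: it turns the maxUnavailable
// field into a concrete upper bound on unavailable pods. The default of 1 and
// the round-down behaviour are assumptions for illustration.
func resolveMaxUnavailable(maxUnavailable *intstr.IntOrString, replicas int) (int, error) {
	if maxUnavailable == nil {
		return 1, nil // assumed default: at most one pod unavailable
	}
	// Round percentages down (as Deployments do for maxUnavailable), then
	// floor at 1 so the rollout can always make progress.
	v, err := intstr.GetScaledValueFromIntOrPercent(maxUnavailable, replicas, false)
	if err != nil {
		return 0, err
	}
	if v < 1 {
		v = 1
	}
	return v, nil
}

func main() {
	pct := intstr.FromString("25%")
	n, _ := resolveMaxUnavailable(&pct, 10)
	fmt.Println(n) // 2: 25% of 10 replicas, rounded down
}
```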

Contributor

So 'maxUnavailable' takes effect when 'memberUpdateStrategy' is set to 'bestEffortParallel' or 'parallel'?

feat: Supports upper-layer modification of the InstanceSet's UpdateStrategy

Signed-off-by: Liang Deng <283304489@qq.com>
Labels
area/user-interaction · pre-approve (Fork PR Pre Approve Test) · size/XL (denotes a PR that changes 500-999 lines)
Development

Successfully merging this pull request may close these issues.

[Features] Supports upper-layer modification of the InstanceSet's UpdateStrategy
4 participants