feat: adds failure domain api for AWS and EKS #1347
Conversation
```yaml
- name: workerConfig
  value:
    aws:
      failureDomain: us-west-2a
```
Were you able to verify this manually for EKS? From the docs it uses `failureDomain: "1"`:
https://cluster-api-aws.sigs.k8s.io/topics/failure-domains/worker-nodes#failure-domains-in-worker-nodes
But in the CAPA code it does look like it should be the AZ, as you have it documented.
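For context, the variable under discussion would presumably sit in a Cluster's topology along these lines (shape inferred from the quoted snippet; a sketch, not verified output):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
spec:
  topology:
    variables:
      - name: workerConfig
        value:
          aws:
            failureDomain: us-west-2a # AZ name, not an index like "1"
```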
Co-authored-by: Dimitri Koshkin <dimitri.koshkin@gmail.com>
New file, `@@ -0,0 +1,73 @@`:

```toml
+++
title = "AWS Failure Domain"
```
| title = "AWS Failure Domain" | |
| title = "EKS Failure Domain" |
```yaml
variables:
  - name: workerConfig
    value:
      aws:
```
```suggestion
eks:
```
```yaml
overrides:
  - name: workerConfig
    value:
      aws:
```
```suggestion
eks:
```
(The same `aws:` → `eks:` suggestion was made on two more identical `overrides` blocks.)
```go
if obj.GetKind() != "MachineDeployment" || obj.GetAPIVersion() != clusterv1.GroupVersion.String() {
	log.V(5).Info("not a MachineDeployment, skipping")
	return nil
}

log.WithValues(
	"patchedObjectKind", obj.GetKind(),
	"patchedObjectName", client.ObjectKeyFromObject(obj),
).Info("setting failure domain in worker MachineDeployment spec")

if err := unstructured.SetNestedField(
	obj.Object,
	failureDomainVar,
	"spec", "template", "spec", "failureDomain",
); err != nil {
	return err
```
We should be using `patches.MutateIfApplicable` instead, for consistency with the other handlers, and use concrete types instead of `unstructured`.
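A rough sketch of what that could look like, assuming `MutateIfApplicable` follows the generic pattern used by the other handlers in this repo; the selector helper and exact signature here are assumptions, not the final API:

```go
// Sketch only. Assumes:
//   - patches.MutateIfApplicable is generic over the patched type, matching
//     the pattern used by the other handlers in this repo;
//   - selectors.WorkersMachineDeployment() is a hypothetical selector for
//     worker MachineDeployments;
//   - clusterv1 is sigs.k8s.io/cluster-api/api/v1beta1, ctrlclient is
//     sigs.k8s.io/controller-runtime/pkg/client, ptr is k8s.io/utils/ptr.
return patches.MutateIfApplicable(
	obj, vars, &holderRef, selectors.WorkersMachineDeployment(), log,
	func(md *clusterv1.MachineDeployment) error {
		log.WithValues(
			"patchedObjectKind", "MachineDeployment",
			"patchedObjectName", ctrlclient.ObjectKeyFromObject(md),
		).Info("setting failure domain in worker MachineDeployment spec")

		// With the concrete type there is no unstructured field path:
		// spec.template.spec.failureDomain is just a *string.
		md.Spec.Template.Spec.FailureDomain = ptr.To(failureDomainVar)
		return nil
	},
)
```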
```go
	variableFieldPath []string
}

func NewWorkerPatch() *awsFailureDomainWorkerPatchHandler {
```
We should add failure domain patch for AWS control plane too.
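If the control plane side mirrors the worker variable, the user-facing API would presumably look something like this (purely illustrative; the `controlPlane` nesting and field name are assumptions):

```yaml
variables:
  - name: clusterConfig
    value:
      controlPlane:
        aws:
          failureDomain: us-west-2a # assumed field, mirroring the worker API
```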
Suggested a few changes.
**What problem does this PR solve?**:
Adds a failure domain API to AWS/EKS MachineDeployments.
Which issue(s) this PR fixes:
https://jira.nutanix.com/browse/NCN-110350
How Has This Been Tested?:
Special notes for your reviewer: