HIVE-2302, HIVE-2644: Pass metadata.json through opaquely #2729
base: master
Conversation
@2uasimojo: This pull request references HIVE-2302 which is a valid jira issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Skipping CI for Draft Pull Request.
[APPROVAL NOTIFIER] This PR is APPROVED. This pull request has been approved by: 2uasimojo.
TODO: Deprovisioner side
Force-pushed 8707a79 to d09e43e
Force-pushed d09e43e to 13ad458
Force-pushed 13ad458 to ace243e
@2uasimojo: This pull request references HIVE-2302 which is a valid jira issue. This pull request references HIVE-2644 which is a valid jira issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Force-pushed ace243e to cf8610a
/test e2e e2e-azure e2e-gcp e2e-vsphere e2e-openstack 🤞
Force-pushed cf8610a to 268d7cc
/test e2e-azure e2e-vsphere
Force-pushed 268d7cc to bce1629
/test e2e-vsphere
Force-pushed bce1629 to cb1b5d5
/hold for moar testings
...so I think I'll just rebase to pick up that last one anyway. ...but I might as well wait until #2754 lands.
Force-pushed cb1b5d5 to f6f2464
/assign @dlom
Legacy granny switch validated via #2759 ✓
Well, mostly. Previously, any time the installer added a field to metadata.json, we would need to evaluate it and possibly add a bespoke field and code path to make sure it was supplied to the destroyer at deprovision time.

With this change, we're offloading metadata.json verbatim (except in some cases where we have to scrub/replace credentials fields -- see HIVE-2804 / openshift#2612) to a new Secret in the ClusterDeployment's namespace, referenced from a new field: ClusterDeployment.Spec.ClusterMetadata.MetadataJSONSecretRef.

For legacy clusters -- those created before this change -- we attempt to retrofit the new Secret based on the legacy fields. This is best effort and may not always work.

In the future (but not here!), instead of building the installer's ClusterMetadata structure for the destroyer from individual fields in the CD's ClusterMetadata, we'll unmarshal it directly from the contents of this Secret.
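To illustrate the eventual direction, here is a minimal sketch of unmarshaling a stored metadata.json payload directly, as the destroyer might someday do from the Secret's contents. The struct and field names here are illustrative stand-ins, not the installer's actual ClusterMetadata type.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// clusterMetadata is a hypothetical, trimmed-down stand-in for the
// installer's ClusterMetadata structure.
type clusterMetadata struct {
	ClusterName string `json:"clusterName"`
	ClusterID   string `json:"clusterID"`
	InfraID     string `json:"infraID"`
}

// parseMetadata unmarshals raw metadata.json bytes. In Hive these bytes
// would come from the Secret referenced by
// ClusterDeployment.Spec.ClusterMetadata.MetadataJSONSecretRef.
func parseMetadata(raw []byte) (clusterMetadata, error) {
	var md clusterMetadata
	err := json.Unmarshal(raw, &md)
	return md, err
}

func main() {
	raw := []byte(`{"clusterName":"mycluster","clusterID":"abc-123","infraID":"mycluster-x7k9p"}`)
	md, err := parseMetadata(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(md.InfraID) // mycluster-x7k9p
}
```

The appeal of this approach is that new installer fields flow through without Hive needing to know about them until the destroyer actually consumes them.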
An earlier commit ensures that ClusterDeployments have an associated Secret containing the metadata.json emitted by the installer. This change adds a new generic destroyer via the (existing) `hiveutil deprovision` command that consumes this metadata.json to deprovision the cluster. This new behavior is the default, but we also include an escape hatch to run the platform-specific legacy destroyer by setting the following annotation on the ClusterDeployment: `hive.openshift.io/legacy-deprovision: "true"`
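The escape-hatch check described above might look roughly like the following sketch. This is not Hive's actual deprovision code path, just an isolated illustration of selecting a destroyer based on the annotation.

```go
package main

import "fmt"

// Annotation named in the PR description; "true" opts a ClusterDeployment
// back into the platform-specific legacy destroyer.
const legacyDeprovisionAnnotation = "hive.openshift.io/legacy-deprovision"

// useLegacyDestroyer reports whether the ClusterDeployment's annotations
// opt out of the new generic metadata.json-based destroyer.
func useLegacyDestroyer(annotations map[string]string) bool {
	return annotations[legacyDeprovisionAnnotation] == "true"
}

func main() {
	annotations := map[string]string{legacyDeprovisionAnnotation: "true"}
	if useLegacyDestroyer(annotations) {
		fmt.Println("running platform-specific legacy destroyer")
	} else {
		fmt.Println("running generic metadata.json destroyer")
	}
}
```

Defaulting to the generic path while keeping an annotation-gated fallback lets clusters revert per-ClusterDeployment if the new destroyer misbehaves.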
/retest hive-on-pull-request
/retest hive-mce-26-on-pull-request
Force-pushed f6f2464 to 8158d73
/retest hive-on-pull-request
/retest hive-mce-26-on-pull-request
/test e2e-openstack (pre-actual-test IPI setup flake)
/test e2e-openstack (same again)
/test e2e-openstack (looks like provision timed out?)
@2uasimojo: all tests passed!
```go
// TODO: Make a registry or interface for this
var ConfigureCreds func(client.Client)
```
I think this is fine, it's readable and makes sense
```go
// If env vars are unset, the destroyer will fail organically.
ConfigureCreds = func(c client.Client) {
	nutanixutil.ConfigureCreds(c)
	metadata.Nutanix.Username = os.Getenv(constants.NutanixUsernameEnvVar)
	metadata.Nutanix.Password = os.Getenv(constants.NutanixPasswordEnvVar)
}
```
Suggested change:

```go
// If env vars are unset, the destroyer will fail organically.
ConfigureCreds = nutanixutil.ConfigureCreds
metadata.Nutanix.Username = os.Getenv(constants.NutanixUsernameEnvVar)
metadata.Nutanix.Password = os.Getenv(constants.NutanixPasswordEnvVar)
```
I think you can just write it like this. The function is immediately invoked right below
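The reviewer's point -- that a closure which only forwards its argument can be replaced by assigning the function value directly -- can be shown in isolation. The names below are hypothetical, not Hive's actual types:

```go
package main

import "fmt"

type fakeClient struct{ configured bool }

// configureCreds stands in for a platform util's ConfigureCreds function.
func configureCreds(c *fakeClient) { c.configured = true }

func main() {
	// Wrapping the function in a closure that only forwards its argument...
	var viaClosure func(*fakeClient) = func(c *fakeClient) { configureCreds(c) }
	// ...behaves the same as assigning the function value directly:
	var direct func(*fakeClient) = configureCreds

	a, b := &fakeClient{}, &fakeClient{}
	viaClosure(a)
	direct(b)
	fmt.Println(a.configured, b.configured) // true true
}
```

One caveat: the two forms would differ if the wrapped function could be reassigned between definition and call; since ConfigureCreds is invoked immediately afterward here, they are equivalent.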
```go
// (They were there originally, but we scrubbed them for security.)
// If env vars are unset, the destroyer will fail organically.
ConfigureCreds = func(c client.Client) {
	vsphereutil.ConfigureCreds(c)
```
Same here. Nothing else inside the function is referencing this `c` variable; I think it can just be directly hoisted up into the switch case.
```go
	clusterMetadata *hivev1.ClusterMetadata,
	metadataJSON []byte,
	forceUpdate bool,
	logger log.FieldLogger) error {
```
Suggested change:

```go
	logger log.FieldLogger,
) error {
```
Cosmetic-only suggestion.