Dynamic schema rfc #368

Merged · 8 commits · Jun 29, 2024
Changes from 6 commits
15 changes: 15 additions & 0 deletions docs.md
@@ -349,6 +349,21 @@ When a UserAttribute is updated, the following checks take place:
- If set, `disableAfter` must be zero or a positive duration (e.g. `240h`).
- If set, `deleteAfter` must be zero or a positive duration (e.g. `240h`).

# provisioning.cattle.io/v1

## Cluster

### Mutation Checks

#### On Update

##### Dynamic Schema Drop

Check for the presence of the `provisioning.cattle.io/allow-dynamic-schema-drop` annotation. If the value is `"true"`,
perform no mutations. If the value is not present or not `"true"`, compare the value of the `dynamicSchemaSpec` field
for each `machinePool` to its previous value. If the values are not identical, revert the `dynamicSchemaSpec` for that
`machinePool`, but do not reject the request.
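
For illustration, a client that intends to drop the stored `dynamicSchemaSpec` could set the annotation before updating the cluster. The sketch below is hypothetical and not part of this change: the package name, helper name, and import path are assumptions, while the annotation key and the `DynamicSchemaSpec` field come from the mutator further down in this diff.

```go
// Package example sketches how a client could opt into dropping dynamicSchemaSpec.
// Illustration only, not part of this PR.
package example

import (
	provv1 "github.com/rancher/rancher/pkg/apis/provisioning.cattle.io/v1" // assumed import path
)

const allowDynamicSchemaDropAnnotation = "provisioning.cattle.io/allow-dynamic-schema-drop"

// allowSchemaDrop (hypothetical) sets the allow annotation and then clears each
// machine pool's dynamicSchemaSpec, so the webhook will not revert the change.
func allowSchemaDrop(cluster *provv1.Cluster) {
	if cluster.Spec.RKEConfig == nil {
		return
	}
	if cluster.Annotations == nil {
		cluster.Annotations = map[string]string{}
	}
	cluster.Annotations[allowDynamicSchemaDropAnnotation] = "true"
	for i := range cluster.Spec.RKEConfig.MachinePools {
		cluster.Spec.RKEConfig.MachinePools[i].DynamicSchemaSpec = ""
	}
}
```

Without the annotation, the same update is still accepted, but the webhook restores the previous `dynamicSchemaSpec` values.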

# rbac.authorization.k8s.io/v1

## ClusterRole
2 changes: 2 additions & 0 deletions go.mod
@@ -18,6 +18,8 @@ replace (
k8s.io/controller-manager => k8s.io/controller-manager v0.30.1
k8s.io/cri-api => k8s.io/cri-api v0.30.1
k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.30.1
k8s.io/dynamic-resource-allocation => k8s.io/dynamic-resource-allocation v0.30.1
k8s.io/endpointslice => k8s.io/endpointslice v0.30.1
k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.30.1
k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.30.1
k8s.io/kube-proxy => k8s.io/kube-proxy v0.30.1
10 changes: 10 additions & 0 deletions pkg/resources/provisioning.cattle.io/v1/cluster/Cluster.md
@@ -0,0 +1,10 @@
## Mutation Checks

### On Update

#### Dynamic Schema Drop

Check for the presence of the `provisioning.cattle.io/allow-dynamic-schema-drop` annotation. If the value is `"true"`,
perform no mutations. If the value is not present or not `"true"`, compare the value of the `dynamicSchemaSpec` field
for each `machinePool` to its previous value. If the values are not identical, revert the `dynamicSchemaSpec` for that
`machinePool`, but do not reject the request.
54 changes: 48 additions & 6 deletions pkg/resources/provisioning.cattle.io/v1/cluster/mutator.go
@@ -42,11 +42,12 @@ const (
// MountPath is where the admission control configuration file will be mounted in the control plane nodes
mountPath = "/etc/rancher/%s/config/rancher-psact.yaml"

controlPlaneRoleLabel = "rke.cattle.io/control-plane-role"
secretAnnotation = "rke.cattle.io/object-authorized-for-clusters"
runtimeK3S = "k3s"
runtimeRKE2 = "rke2"
runtimeRKE = "rke"
controlPlaneRoleLabel = "rke.cattle.io/control-plane-role"
secretAnnotation = "rke.cattle.io/object-authorized-for-clusters"
allowDynamicSchemaDropAnnotation = "provisioning.cattle.io/allow-dynamic-schema-drop"
runtimeK3S = "k3s"
runtimeRKE2 = "rke2"
runtimeRKE = "rke"
)

var (
@@ -102,7 +103,7 @@ func (m *ProvisioningClusterMutator) Admit(request *admission.Request) (*admissi
listTrace := trace.New("provisioningCluster Admit", trace.Field{Key: "user", Value: request.UserInfo.Username})
defer listTrace.LogIfLong(admission.SlowTraceDuration)

cluster, err := objectsv1.ClusterFromRequest(&request.AdmissionRequest)
cluster, oldCluster, err := objectsv1.ClusterOldAndNewFromRequest(&request.AdmissionRequest)
if err != nil {
return nil, err
}
@@ -133,13 +134,54 @@ return response, nil
return response, nil
}

if request.Operation == admissionv1.Update {
response, err = m.handleDynamicSchemaDrop(request, oldCluster, cluster)
if err != nil {
return nil, fmt.Errorf("unable to evaluate dynamic schema drop, %w", err)
}
if response.Result != nil {
return response, nil
}
}

response.Allowed = true
if err = patch.CreatePatch(clusterJSON, cluster, response); err != nil {
return nil, fmt.Errorf("failed to create patch: %w", err)
}
return response, nil
}

// handleDynamicSchemaDrop watches for provisioning cluster updates, and reinserts the previous value of the
// dynamicSchemaSpec field for a machine pool if the "provisioning.cattle.io/allow-dynamic-schema-drop" annotation is
// not set to "true" on the cluster. If the value of the annotation is "true", no mutation is performed.
func (m *ProvisioningClusterMutator) handleDynamicSchemaDrop(request *admission.Request, oldCluster, cluster *v1.Cluster) (*admissionv1.AdmissionResponse, error) {
if cluster.Name == "local" || cluster.Spec.RKEConfig == nil {
Review thread on this line:

- we should make sure the harvester team isn't somehow relying on dynamic schema for the local cluster in rancherd
- i don't think we are leveraging dynamic schema on the local cluster for anything during the harvester bootstrap / lifecycle management. @bk201 @Vicente-Cheng
- @jakefhyde (Contributor, Author), Jun 26, 2024: Sounds good then, as long as harvester isn't worried about the dynamic schema changing with new flags and causing reprovisioning for the local cluster (not sure if that's something that's been encountered previously, or if it's even possible for rancherd to bootstrap the local cluster using harvester).
- (Member): @FrankYang0529 Please help take a look too.
- We don't use the MachinePools field in RKEConfig, but we use RotateCertificates. Not sure whether it's related? Thanks.
- @jakefhyde (Contributor, Author): @FrankYang0529 Yeah, not related to any of the day 2 ops, only when new fields are added to a driver.

return admission.ResponseAllowed(), nil
}

if cluster.Annotations[allowDynamicSchemaDropAnnotation] == "true" {
return admission.ResponseAllowed(), nil
}

oldClusterPools := map[string]*v1.RKEMachinePool{}
for _, mp := range oldCluster.Spec.RKEConfig.MachinePools {
oldClusterPools[mp.Name] = &mp
}

for i, newPool := range cluster.Spec.RKEConfig.MachinePools {
oldPool, ok := oldClusterPools[newPool.Name]
if !ok {
logrus.Debugf("[%s] new machine pool: %s, skipping validation of dynamic schema spec", request.UID, newPool.Name)
continue
}
if oldPool.DynamicSchemaSpec != "" && newPool.DynamicSchemaSpec == "" {
logrus.Infof("provisioning cluster %s/%s machine pool %s dynamic schema spec mutated without supplying annotation %s, reverting", cluster.Namespace, cluster.Name, newPool.Name, allowDynamicSchemaDropAnnotation)
cluster.Spec.RKEConfig.MachinePools[i].DynamicSchemaSpec = oldPool.DynamicSchemaSpec
}
}
return admission.ResponseAllowed(), nil
}
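
A minimal test-style sketch of the revert behavior, written as if it lived in this same package so the unexported method is reachable; it assumes the package's existing `v1` and `admission` imports plus `testing`. It is not part of the PR, and the placeholder spec string and the empty `admission.Request` are assumptions; the types and fields it touches are the ones used by `handleDynamicSchemaDrop` above.

```go
func TestHandleDynamicSchemaDropReverts(t *testing.T) {
	m := &ProvisioningClusterMutator{}
	oldCluster := &v1.Cluster{
		Spec: v1.ClusterSpec{
			RKEConfig: &v1.RKEConfig{
				MachinePools: []v1.RKEMachinePool{
					{Name: "pool1", DynamicSchemaSpec: `{"resourceFields":{}}`}, // placeholder spec
				},
			},
		},
	}

	// The client drops dynamicSchemaSpec without setting the allow annotation.
	newCluster := oldCluster.DeepCopy()
	newCluster.Spec.RKEConfig.MachinePools[0].DynamicSchemaSpec = ""

	resp, err := m.handleDynamicSchemaDrop(&admission.Request{}, oldCluster, newCluster)
	if err != nil || !resp.Allowed {
		t.Fatalf("expected allowed response, got %+v, %v", resp, err)
	}

	// The request is not rejected, but the previous value is reinserted.
	if newCluster.Spec.RKEConfig.MachinePools[0].DynamicSchemaSpec == "" {
		t.Fatal("expected dynamicSchemaSpec to be reverted to its previous value")
	}
}
```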

// handlePSACT updates the cluster and an underlying secret to support PSACT.
// If a PSACT is set in the cluster, handlePSACT generates an admission configuration file, mounts the file into a secret,
// updates the cluster's spec to mount the secret to the control plane nodes, and configures kube-apiserver to use the admission configuration file;