Proofread files (gardener#7483)

n-boshnakov authored Feb 14, 2023
1 parent ac85ac2 commit c7ec5ac
Showing 16 changed files with 440 additions and 428 deletions.
212 changes: 107 additions & 105 deletions docs/proposals/01-extensibility.md

Large diffs are not rendered by default.

139 changes: 70 additions & 69 deletions docs/proposals/02-backupinfra.md

Large diffs are not rendered by default.

22 changes: 11 additions & 11 deletions docs/proposals/03-networking-extensibility.md
@@ -1,6 +1,6 @@
- # Network Extensibility
+ # Networking Extensibility

- Currently Gardener follows a mono network-plugin support model (i.e., Calico). Although this can seem to be the more stable approach, it does not completely reflect the real use-case. This proposal brings forth an effort to add an extra level of customizability to Gardener networking.
+ Currently, Gardener follows a mono network-plugin support model (i.e., Calico). Although this can seem to be the more stable approach, it does not completely reflect the real use-case. This proposal brings forth an effort to add an extra level of customizability to Gardener networking.

## Motivation

@@ -11,13 +11,13 @@ Gardener is an open-source project that provides a nested user model. Basically,

For the first set of users, the choice of network plugin might not be so important; however, for the second class of users (i.e., Hosted), it is important to be able to customize networking based on their needs.

- Furthermore, Gardener provisions clusters on different cloud-providers with different networking requirements. For example, Azure does not support Calico Networking [1], this leads to the introduction of manual exceptions in static add-on charts which is error prune and can lead to failures during upgrades.
+ Furthermore, Gardener provisions clusters on different cloud-providers with different networking requirements. For example, Azure does not support Calico Networking [1]; this leads to the introduction of manual exceptions in static add-on charts, which is error prone and can lead to failures during upgrades.

- Finally, every provider is different, and thus the network always needs to adapt to the infrastructure needs to provider better performance. Consistency does not necessarily lie in the implementation but in the interface.
+ Finally, every provider is different, and thus the network always needs to adapt to the infrastructure needs to provide better performance. Consistency does not necessarily lie in the implementation but in the interface.

## Gardener Network Extension

- The goal of the Gardener Network Extensions is to support different network plugin, therefore, the specification for the network resource won't be fixed and will be customized based on the underlying network plugin. To do so, a `NetworkConfig` field in the spec will be provided where each plugin will define. Below is an example for deploy Calico as the cluster network plugin.
+ The goal of the Gardener Network Extensions is to support different network plugins; therefore, the specification for the network resource won't be fixed and will be customized based on the underlying network plugin. To do so, a `NetworkConfig` field in the spec will be provided where each plugin will be defined. Below is an example for deploying Calico as the cluster network plugin.
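Purely as an illustration of the `NetworkConfig` idea (the authoritative example sits in the collapsed spec sections of this diff; the API group, kind, and Calico-specific fields below are assumptions, not the proposal's final shape):

```yaml
# Hypothetical Network resource sketch -- names and values are illustrative only.
apiVersion: extensions.gardener.cloud/v1alpha1
kind: Network
metadata:
  name: my-network
  namespace: shoot--project--cluster
spec:
  type: calico                # network plugin chosen for the shoot
  podCIDR: 100.96.0.0/11
  serviceCIDR: 100.64.0.0/13
  providerConfig:             # plugin-specific, free-form configuration
    apiVersion: calico.networking.extensions.gardener.cloud/v1alpha1
    kind: NetworkConfig
    backend: bird
    ipam:
      type: host-local
```

The key design point is that everything under `providerConfig` is opaque to Gardener core and interpreted only by the respective network extension.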


### Long Term Spec
@@ -68,12 +68,12 @@ status:
### First Implementation (Short Term)
- As an initial implementation the network plugin type will be specified by the user e.g., Calico (without further configuration in the provider spec). This will then be used to generate the `Network` resource in the seed. The Network operator will pick it up, and apply the configuration based on the `spec.cloudProvider` specified directly to the shoot or via the Gardener resource manager (still in the works).
+ As an initial implementation, the network plugin type will be specified by the user, e.g. Calico (without further configuration in the provider spec). This will then be used to generate the `Network` resource in the seed. The Network operator will pick it up, and apply the configuration based on the `spec.cloudProvider` specified directly to the shoot or via the Gardener resource manager (still in the works).

- The `cloudProvider` field in the spec is just an initial catalyst but not meant to be stay long-term. In the future, the network provider configuration will be customized to match the best needs of the infrastructure.
+ The `cloudProvider` field in the spec is just an initial catalyst but not meant to stay long-term. In the future, the network provider configuration will be customized to match the best needs of the infrastructure.

Here is what the simplified initial spec would look like:

@@ -98,7 +98,7 @@

The network resource needs to be created early on during cluster provisioning. Once created, the Network operator residing in every seed will create all the necessary networking resources and apply them to the shoot cluster.

- The status of the Network resource should reflect the health of the networking components as well as additional tests if required.
+ The status of the Network resource should reflect the health of the networking components, as well as additional tests if required.

## References

38 changes: 19 additions & 19 deletions docs/proposals/04-new-core-gardener-cloud-apis.md
@@ -1,26 +1,26 @@
---
- title: New Core Gardener Cloud APIs
+ title: 03 New Core Gardener Cloud APIs
---

- # New `core.gardener.cloud/v1beta1` APIs required to extract cloud-specific/OS-specific knowledge out of Gardener core
+ # New `core.gardener.cloud/v1beta1` APIs Required to Extract Cloud-Specific/OS-Specific Knowledge Out of Gardener Core

## Table of Contents

- - [New `core.gardener.cloud/v1beta1` APIs required to extract cloud-specific/OS-specific knowledge out of Gardener core](#new-coregardenercloudv1beta1-apis-required-to-extract-cloud-specificos-specific-knowledge-out-of-gardener-core)
+ - [New `core.gardener.cloud/v1beta1` APIs Required to Extract Cloud-Specific/OS-Specific Knowledge Out of Gardener Core](#new-coregardenercloudv1beta1-apis-required-to-extract-cloud-specificos-specific-knowledge-out-of-gardener-core)
- [Table of Contents](#table-of-contents)
- [Summary](#summary)
- [Motivation](#motivation)
- [Goals](#goals)
- [Non-Goals](#non-goals)
- [Proposal](#proposal)
- - [`CloudProfile` resource](#cloudprofile-resource)
- - [`Seed` resource](#seed-resource)
- - [`Project` resource](#project-resource)
+ - [`CloudProfile` Resource](#cloudprofile-resource)
+ - [`Seed` Resource](#seed-resource)
+ - [`Project` Resource](#project-resource)
- [`SecretBinding` resource](#secretbinding-resource)
- - [`Quota` resource](#quota-resource)
- - [`BackupBucket` resource](#backupbucket-resource)
- - [`BackupEntry` resource](#backupentry-resource)
- - [`Shoot` resource](#shoot-resource)
+ - [`Quota` Resource](#quota-resource)
+ - [`BackupBucket` Resource](#backupbucket-resource)
+ - [`BackupEntry` Resource](#backupentry-resource)
+ - [`Shoot` Resource](#shoot-resource)
- [`Plant` resource](#plant-resource)

## Summary
@@ -49,9 +49,9 @@ In order to achieve the same, we have to provide proper APIs.
## Proposal

In GEP-1 we already have proposed a first version for new `CloudProfile` and `Shoot` resources.
- In order to deprecate the existing/old `garden.sapcloud.io/v1beta1` API group (and remove it, eventually) we should move all existing resources to the new `core.gardener.cloud/v1beta1` API group.
+ In order to deprecate the existing/old `garden.sapcloud.io/v1beta1` API group (and remove it, eventually), we should move all existing resources to the new `core.gardener.cloud/v1beta1` API group.

- ### `CloudProfile` resource
+ ### `CloudProfile` Resource

```yaml
apiVersion: core.gardener.cloud/v1beta1
@@ -199,7 +199,7 @@ spec:
# id: d61c3912-8422-4daf-835e-854efa0062e4
```

- ### `Seed` resource
+ ### `Seed` Resource

Special note: The proposal contains fields that are not yet existing in the current `garden.sapcloud.io/v1beta1.Seed` resource, but they should be implemented (open issues that require them are linked).

@@ -263,7 +263,7 @@ spec:
- key: seed.gardener.cloud/invisible
blockCIDRs:
- 169.254.169.254/32
- backup: # See https://github.com/gardener/gardener/blob/master/docs/proposals/02-backupinfra.md.
+ backup: # See https://github.com/gardener/gardener/blob/master/docs/proposals/02-backupinfra.md
type: <some-provider-name> # {aws,azure,gcp,...}
# region: eu-west-1
secretRef:
@@ -284,7 +284,7 @@ status:
observedGeneration: 1
```
- ### `Project` resource
+ ### `Project` Resource

Special note: The `members` and `viewers` field of the `garden.sapcloud.io/v1beta1.Project` resource will be merged together into one `members` field.
Every member will have a role that is either `admin` or `viewer`.
@@ -358,7 +358,7 @@ quotas: []
# # namespace: namespace-other-than-'garden-core' // optional
```

- ### `Quota` resource
+ ### `Quota` Resource

Special note: No modifications needed compared to the current `garden.sapcloud.io/v1beta1.Quota` resource.

@@ -382,7 +382,7 @@ spec:
loadbalancer: "100"
```

- ### `BackupBucket` resource
+ ### `BackupBucket` Resource

Special note: This new resource is cluster-scoped.

@@ -429,7 +429,7 @@ status:
observedGeneration: 1
```

- ### `BackupEntry` resource
+ ### `BackupEntry` Resource

Special note: This new resource is cluster-scoped.

@@ -476,7 +476,7 @@ status:
observedGeneration: 1
```

- ### `Shoot` resource
+ ### `Shoot` Resource

Special notes:

18 changes: 9 additions & 9 deletions docs/proposals/05-versioning-policy.md
@@ -4,26 +4,26 @@ Please refer to [this document](../usage/shoot_versions.md) for the documentatio

## Goal

- - As a Garden operator I would like to define a clear Kubernetes version policy, which informs my users about deprecated or expired Kubernetes versions.
- - As an user of Gardener, I would like to get information which Kubernetes version is supported for how long. I want to be able to get this information via API (cloudprofile) and also in the Dashboard.
+ - As a Garden operator, I would like to define a clear Kubernetes version policy, which informs my users about deprecated or expired Kubernetes versions.
+ - As a user of Gardener, I would like to get information about which Kubernetes version is supported for how long. I want to be able to get this information via API (cloudprofile) and also in the Dashboard.

## Motivation

- The Kubernetes community releases **minor** versions roughly every three months and usually maintains **three minor** versions (the actual and the last two) with bug fixes and security updates. Patch releases are done more frequently. Operators of Gardener should be able to define their own Kubernetes version policy. This GEP suggests the possibility for operators to classify Kubernetes versions, while they are going through their "maintenance life-cycle".
+ The Kubernetes community releases **minor** versions roughly every three months and usually maintains **three minor** versions (the actual and the last two) with bug fixes and security updates. Patch releases are done more frequently. Operators of Gardener should be able to define their own Kubernetes version policy. This GEP suggests the possibility for operators to classify Kubernetes versions while they are going through their "maintenance life-cycle".

## Kubernetes Version Classifications

An operator should be able to classify Kubernetes versions differently while they go through their "maintenance life-cycle", starting with **preview**, **supported**, **deprecated**, and finally **expired**. This information should be programmatically available in the `cloudprofiles` of the Garden cluster as well as in the Dashboard. Please also note that Gardener keeps the control plane and the workers on the same Kubernetes version.

- For further explanation of the possible classifications, we assume that an operator wants to support four minor versions e.g. v1.16, v1.15, v1.14 and v1.13.
+ For further explanation of the possible classifications, we assume that an operator wants to support four minor versions, e.g. v1.16, v1.15, v1.14, and v1.13.

- - **preview:** After a fresh release of a new Kubernetes **minor** version (e.g. v1.17.0) the operator could tag it as _preview_ until he has gained sufficient experience. It will not become the default in the Gardener Dashboard until he promotes that minor version to _supported_, which could happen a few weeks later with the first patch version.
+ - **preview:** After a fresh release of a new Kubernetes **minor** version (e.g. v1.17.0), the operator could tag it as _preview_ until he has gained sufficient experience. It will not become the default in the Gardener Dashboard until he promotes that minor version to _supported_, which could happen a few weeks later with the first patch version.

- - **supported:** The operator would tag the latest Kubernetes patch versions of the actual (if not still in _preview_) and the last three minor Kubernetes versions as _supported_ (e.g. v1.16.1, v1.15.4, v1.14.9 and v1.13.12). The latest of these becomes the default in the Gardener Dashboard (e.g. v1.16.1).
+ - **supported:** The operator would tag the latest Kubernetes patch versions of the actual (if not still in _preview_) and the last three minor Kubernetes versions as _supported_ (e.g. v1.16.1, v1.15.4, v1.14.9, and v1.13.12). The latest of these becomes the default in the Gardener Dashboard (e.g. v1.16.1).

- - **deprecated:** The operator could decide, that he generally wants to classify every version that is not the latest patch version as _deprecated_ and flag this versions accordingly (e.g. v1.16.0 and older, v1.15.3 and older, 1.14.8 and older as well as v1.13.11 and older). He could also tag all versions (latest or not) of every Kubernetes minor release that is neither the actual nor one of the last three minor Kubernetes versions as _deprecated_, too (e.g. v1.12.x and older). Deprecated versions will eventually expire (i.e., removed).
+ - **deprecated:** The operator could decide that he generally wants to classify every version that is not the latest patch version as _deprecated_ and flag these versions accordingly (e.g. v1.16.0 and older, v1.15.3 and older, 1.14.8 and older, as well as v1.13.11 and older). He could also tag all versions (latest or not) of every Kubernetes minor release that is neither the actual, nor one of the last three minor Kubernetes versions as _deprecated_, too (e.g. v1.12.x and older). Deprecated versions will eventually expire (i.e., be removed).

- - **expired:** This state is a _logical_ state only. It doesn't have to be maintained in the `cloudprofile`. All cluster versions whose `expirationDate` as defined in the `cloudprofile` is expired, are automatically in this _logical_ state. After that date has passed, users cannot create new clusters with that version anymore and any cluster that is on that version will be forcefully migrated in its next maintenance time window, even if the owner has opted out of automatic cluster updates! The forceful update will pick the latest patch version of the current minor Kubernetes version. If the cluster was already on that latest patch version and the latest patch version is also expired, it will continue with latest patch version of the **next minor Kubernetes version**, so **it will result in an update of a minor Kubernetes version, which is potentially harmful to your workload, so you should avoid that/plan ahead!** If that's expired as well, the update process repeats until a non-expired Kubernetes version is reached, so **depending on the circumstances described above, it can happen that the cluster receives multiple consecutive minor Kubernetes version updates!**
+ - **expired:** This state is a _logical_ state only. It doesn't have to be maintained in the `cloudprofile`. All cluster versions whose `expirationDate` as defined in the `cloudprofile` is expired are automatically in this _logical_ state. After that date has passed, users cannot create new clusters with that version anymore and any cluster that is on that version will be forcefully migrated in its next maintenance time window, even if the owner has opted out of automatic cluster updates! The forceful update will pick the latest patch version of the current minor Kubernetes version. If the cluster was already on that latest patch version and the latest patch version is also expired, it will continue with the latest patch version of the **next minor Kubernetes version**, so **it will result in an update of a minor Kubernetes version, which is potentially harmful to your workload, so you should avoid that/plan ahead!** If that's expired as well, the update process repeats until a non-expired Kubernetes version is reached, so, **depending on the circumstances described above, it can happen that the cluster receives multiple consecutive minor Kubernetes version updates!**
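The forceful-update rule described in the **expired** bullet can be sketched as follows (a simplified illustration; the function and parameter names are invented for this sketch, not Gardener's actual code):

```python
def pick_forced_update(current, versions, expired):
    """Return the version an expired cluster is forcefully migrated to.

    `versions` maps each minor (e.g. "1.14") to its ordered patch releases,
    `expired` is the set of expired version strings. Illustrative only.
    """
    minors = sorted(versions, key=lambda m: tuple(map(int, m.split("."))))
    minor = ".".join(current.split(".")[:2])
    target = versions[minor][-1]  # latest patch of the current minor
    if target == current:
        # Already on the latest patch and it is expired: hop to the next
        # minor's latest patch, repeating until a non-expired one is found.
        i = minors.index(minor)
        while target in expired and i + 1 < len(minors):
            i += 1
            target = versions[minors[i]][-1]
    return target
```

Note that when the cluster is not yet on its minor's latest patch, that patch is picked even if it is itself expired; the next maintenance window then repeats the process, which is how multiple consecutive minor updates can occur.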

- To fulfill his specific versioning policy, the Garden operator should be able to classify his versions as well set the expiration date in the `cloudprofiles`. The user should see this classifiers as well as the expiration date in the dashboard.
+ To fulfill his specific versioning policy, the Garden operator should be able to classify his versions, as well as set the expiration date in the `cloudprofiles`. The user should see these classifiers, as well as the expiration date in the dashboard.
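In the `cloudprofile`, such a policy could be expressed roughly like this (a hedged sketch; the `classification` and `expirationDate` fields follow the proposal's description, while the version values and surrounding structure are illustrative):

```yaml
spec:
  kubernetes:
    versions:
    - version: 1.17.0
      classification: preview      # fresh minor, not yet the Dashboard default
    - version: 1.16.1
      classification: supported    # latest supported patch, becomes the default
    - version: 1.16.0
      classification: deprecated
      expirationDate: "2020-04-01T00:00:00Z"  # logically "expired" once this date passes
```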

