diff --git a/docs/deployment/authentication_gardener_control_plane.md b/docs/deployment/authentication_gardener_control_plane.md index aa6e35b2ec6..932ba7a9fe4 100644 --- a/docs/deployment/authentication_gardener_control_plane.md +++ b/docs/deployment/authentication_gardener_control_plane.md @@ -1,16 +1,16 @@ -# Authentication of Gardener control plane components against the Garden cluster +# Authentication of Gardener Control Plane Components Against the Garden Cluster -**Note:** This document refers to Gardener's API server, admission controller, controller manager and scheduler components. Any reference to the term **Gardener control plane component** can be replaced with any of the mentioned above. +> **Note:** This document refers to Gardener's API server, admission controller, controller manager, and scheduler components. Any reference to the term **Gardener control plane component** can be replaced with any of the components mentioned above. There are several authentication possibilities depending on whether or not [the concept of Virtual Garden](https://github.com/gardener/garden-setup#concept-the-virtual-cluster) is used. ## Virtual Garden is not used, i.e., the `runtime` Garden cluster is also the `target` Garden cluster. -#### Automounted Service Account Token -The easiest way to deploy a **Gardener control plane component** will be to not provide `kubeconfig` at all. This way in-cluster configuration and an automounted service account token will be used. The drawback of this approach is that the automounted token will not be automatically rotated. +### Automounted Service Account Token +The easiest way to deploy a **Gardener control plane component** is to not provide a `kubeconfig` at all. This way, in-cluster configuration and an automounted service account token will be used. The drawback of this approach is that the automounted token will not be automatically rotated.
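As a sketch of this automounted-token option (all names here are illustrative and not taken from the Gardener charts), the component's deployment simply omits any `kubeconfig` flag and relies on in-cluster configuration:

```yaml
# Sketch only: a control plane component deployed without a kubeconfig falls
# back to in-cluster configuration, i.e., the automounted service account token
# under /var/run/secrets/kubernetes.io/serviceaccount.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gardener-controller-manager   # illustrative component name
  namespace: garden
spec:
  selector:
    matchLabels:
      app: gardener-controller-manager
  template:
    metadata:
      labels:
        app: gardener-controller-manager
    spec:
      serviceAccountName: gardener-controller-manager
      automountServiceAccountToken: true  # the default; shown for clarity
      containers:
      - name: gardener-controller-manager
        image: <component-image>          # placeholder
        # note: no --kubeconfig flag is passed
```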
-#### Service Account Token Volume Projection -Another solution will be to use [Service Account Token Volume Projection](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection) combined with a `kubeconfig` referencing a token file (see example below). +### Service Account Token Volume Projection +Another solution is to use [Service Account Token Volume Projection](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection) combined with a `kubeconfig` referencing a token file (see the example below). ```yaml apiVersion: v1 kind: Config @@ -35,28 +35,30 @@ This will allow for automatic rotation of the service account token by the `kube ## Virtual Garden is used, i.e., the `runtime` Garden cluster is different from the `target` Garden cluster. -#### Service Account -The easiest way to setup the authentication will be to create a service account and the respective roles will be bound to this service account in the `target` cluster. Then use the generated service account token and craft a `kubeconfig` which will be used by the workload in the `runtime` cluster. This approach does not provide a solution for the rotation of the service account token. However, this setup can be achieved by setting `.Values.global.deployment.virtualGarden.enabled: true` and following these steps: +### Service Account +The easiest way to set up the authentication is to create a service account in the `target` cluster and bind the respective roles to it. Then use the generated service account token and craft a `kubeconfig`, which will be used by the workload in the `runtime` cluster. This approach does not provide a solution for the rotation of the service account token. However, this setup can be achieved by setting `.Values.global.deployment.virtualGarden.enabled: true` and following these steps: 1.
Deploy the `application` part of the charts in the `target` cluster. 2. Get the service account token and craft the `kubeconfig`. 3. Set the crafted `kubeconfig` and deploy the `runtime` part of the charts in the `runtime` cluster. -#### Client Certificate -Another solution will be to bind the roles in the `target` cluster to a `User` subject instead of a service account and use a client certificate for authentication. This approach does not provide a solution for the client certificate rotation. However, this setup can be achieved by setting both `.Values.global.deployment.virtualGarden.enabled: true` and `.Values.global.deployment.virtualGarden..user.name`, then following these steps: +### Client Certificate +Another solution is to bind the roles in the `target` cluster to a `User` subject instead of a service account and use a client certificate for authentication. This approach does not provide a solution for the client certificate rotation. However, this setup can be achieved by setting both `.Values.global.deployment.virtualGarden.enabled: true` and `.Values.global.deployment.virtualGarden..user.name`, then following these steps: 1. Generate a client certificate for the `target` cluster for the respective user. 2. Deploy the `application` part of the charts in the `target` cluster. 3. Craft a `kubeconfig` using the already generated client certificate. 4. Set the crafted `kubeconfig` and deploy the `runtime` part of the charts in the `runtime` cluster. -#### Projected Service Account Token -This approach requires an already deployed and configured [oidc-webhook-authenticator](https://github.com/gardener/oidc-webhook-authenticator) for the `target` cluster. Also the `runtime` cluster should be registered as a trusted identity provider in the `target` cluster. Then projected service accounts tokens from the `runtime` cluster can be used to authenticate against the `target` cluster. 
The needed steps are as follows: +### Projected Service Account Token +This approach requires an already deployed and configured [oidc-webhook-authenticator](https://github.com/gardener/oidc-webhook-authenticator) for the `target` cluster. Also, the `runtime` cluster should be registered as a trusted identity provider in the `target` cluster. Then, projected service account tokens from the `runtime` cluster can be used to authenticate against the `target` cluster. The needed steps are as follows: 1. Deploy [OWA](https://github.com/gardener/oidc-webhook-authenticator) and establish the needed trust. -2. Set `.Values.global.deployment.virtualGarden.enabled: true` and `.Values.global.deployment.virtualGarden..user.name`. **Note:** username value will depend on the trust configuration, e.g., `:system:serviceaccount::` -3. Set `.Values.global..serviceAccountTokenVolumeProjection.enabled: true` and `.Values.global..serviceAccountTokenVolumeProjection.audience`. **Note:** audience value will depend on the trust configuration, e.g., ``. -4. Craft a kubeconfig (see example below). +2. Set `.Values.global.deployment.virtualGarden.enabled: true` and `.Values.global.deployment.virtualGarden..user.name`. + > **Note:** The username value will depend on the trust configuration, e.g., `:system:serviceaccount::` +3. Set `.Values.global..serviceAccountTokenVolumeProjection.enabled: true` and `.Values.global..serviceAccountTokenVolumeProjection.audience`. + > **Note:** The audience value will depend on the trust configuration, e.g., ``. +4. Craft a kubeconfig (see the example below). 5. Deploy the `application` part of the charts in the `target` cluster. 6. Deploy the `runtime` part of the charts in the `runtime` cluster.
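Steps 2 and 3 above could look roughly as follows in the chart values. This is only a sketch: the component key `gardenerApiserver` is hypothetical (the placeholders in the value paths above are left unfilled on purpose), and the actual `user.name` and `audience` values depend entirely on your trust configuration:

```yaml
global:
  deployment:
    virtualGarden:
      enabled: true
      gardenerApiserver:        # hypothetical component key
        user:
          name: "<prefix>:system:serviceaccount:<namespace>:<serviceaccount>"  # depends on trust config
  gardenerApiserver:            # hypothetical component key
    serviceAccountTokenVolumeProjection:
      enabled: true
      audience: "<audience>"    # depends on trust config
```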
diff --git a/docs/deployment/configuring_logging.md b/docs/deployment/configuring_logging.md index e6a7ec52aee..070a6069d75 100644 --- a/docs/deployment/configuring_logging.md +++ b/docs/deployment/configuring_logging.md @@ -1,18 +1,18 @@ -# Configuring the Logging stack via Gardenlet configurations +# Configuring the Logging Stack via gardenlet Configurations -# Enable the Logging +## Enable the Logging -In order to install the Gardener logging stack the `logging.enabled` configuration option has to be enabled in the Gardenlet configuration: +In order to install the Gardener logging stack, the `logging.enabled` configuration option has to be enabled in the gardenlet configuration: ```yaml logging: enabled: true ``` -From now on each Seed is going to have a logging stack which will collect logs from all pods and some systemd services. Logs related to Shoots with `testing` purpose are dropped in the `fluent-bit` output plugin. Shoots with a purpose different than `testing` have the same type of log aggregator (but different instance) as the Seed. The logs can be viewed in the Grafana in the `garden` namespace for the Seed components and in the respective shoot control plane namespaces. +From now on, each Seed is going to have a logging stack which will collect logs from all pods and some systemd services. Logs related to Shoots with `testing` purpose are dropped in the `fluent-bit` output plugin. Shoots with a purpose different from `testing` have the same type of log aggregator (but a different instance) as the Seed. The logs can be viewed in Grafana in the `garden` namespace for the Seed components and in the respective shoot control plane namespaces. -# Enable logs from the Shoot's node systemd services.
+## Enable Logs from the Shoot's Node systemd Services -The logs from the systemd services on each node can be retrieved by enabling the `logging.shootNodeLogging` option in the Gardenlet configuration: +The logs from the systemd services on each node can be retrieved by enabling the `logging.shootNodeLogging` option in the gardenlet configuration: ```yaml logging: enabled: true @@ -22,16 +22,16 @@ logging: - "deployment" ``` -Under the `shootPurpose` section just list all the shoot purposes for which the Shoot node logging feature will be enabled. Specifying the `testing` purpose has no effect because this purpose prevents the logging stack installation. +Under the `shootPurpose` section, just list all the shoot purposes for which the Shoot node logging feature will be enabled. Specifying the `testing` purpose has no effect because this purpose prevents the logging stack installation. Logs can be viewed in the operator Grafana! -The dedicated labels are `unit`, `syslog_identifier` and `nodename` in the `Explore` menu. +The dedicated labels are `unit`, `syslog_identifier`, and `nodename` in the `Explore` menu. -# Configuring the log processor +## Configuring the Log Processor -Under `logging.fluentBit` there is three optional sections. -- `input`: This overwrite the input configuration of the fluent-bit log processor. - - `output`: This overwrite the output configuration of the fluent-bit log processor. - - `service`: This overwrite the service configuration of the fluent-bit log processor. +Under `logging.fluentBit` there are three optional sections: +- `input`: This overwrites the input configuration of the fluent-bit log processor. + - `output`: This overwrites the output configuration of the fluent-bit log processor. + - `service`: This overwrites the service configuration of the fluent-bit log processor. ```yaml logging: @@ -48,9 +48,9 @@ logging: ... 
``` -# additional egress IPBlock for allow-fluentbit NetworkPolicy +## Additional Egress IPBlock for the allow-fluentbit NetworkPolicy -The optional setting under `logging.fluentBit.networkPolicy.additionalEgressIPBlocks` add additional egress IPBlock to `allow-fluentbit` NetworkPolicy to forward logs to a central system. +The optional setting under `logging.fluentBit.networkPolicy.additionalEgressIPBlocks` adds an additional egress IPBlock to the `allow-fluentbit` NetworkPolicy to forward logs to a central system. ```yaml logging: @@ -60,9 +60,9 @@ logging: - 123.123.123.123/32 ``` -# Configure central logging +## Configure Central Logging -For central logging, the output configuration of the fluent-bit log processor can be overwritten (`logging.fluentBit.output`) and the Loki instances deployments in Garden and Shoot namespace can be enabled/disabled (`logging.loki.enabled`), by default Loki is enabled. +For central logging, the output configuration of the fluent-bit log processor can be overwritten (`logging.fluentBit.output`) and the Loki instance deployments in the Garden and Shoot namespaces can be enabled/disabled (`logging.loki.enabled`); by default, Loki is enabled. ```yaml logging: @@ -75,12 +75,12 @@ logging: enabled: false ``` -# Configuring central Loki storage capacity +## Configuring Central Loki Storage Capacity By default, the central Loki has `100Gi` of storage capacity. To overwrite the current central Loki storage capacity, the `logging.loki.garden.storage` setting in the gardenlet's component configuration should be altered. -If you need to increase it you can do so without losing the current data by specifying higher capacity. Doing so, the Loki's `PersistentVolume` capacity will be increased instead of deleting the current PV. -However, if you specify less capacity then the `PersistentVolume` will be deleted and with it the logs, too. +If you need to increase it, you can do so without losing the current data by specifying a higher capacity.
By doing so, the Loki `PersistentVolume` capacity will be increased instead of deleting the current PV. +However, if you specify less capacity, then the `PersistentVolume` will be deleted and with it the logs, too. ```yaml logging: diff --git a/docs/deployment/deploy_gardenlet.md b/docs/deployment/deploy_gardenlet.md index 5635d2106de..6c67d907b05 100644 --- a/docs/deployment/deploy_gardenlet.md +++ b/docs/deployment/deploy_gardenlet.md @@ -1,17 +1,14 @@ # Deploying Gardenlets -Gardenlets act as decentral "agents" to manage shoot clusters of a seed cluster. +Gardenlets act as decentralized "agents" to manage the shoot clusters of a seed cluster. -To support scaleability in an automated way, gardenlets are deployed automatically. However, you can still deploy gardenlets manually to be more flexible, for example, when shoot clusters that need to be managed by Gardener are behind a firewall. The gardenlet only requires network connectivity from the gardenlet to the Garden cluster (not the other way round), so it can be used to register Kubernetes clusters with no public endpoint. +To support scalability in an automated way, gardenlets are deployed automatically. However, you can still deploy gardenlets manually to be more flexible, for example, when the shoot clusters that need to be managed by Gardener are behind a firewall. The gardenlet only requires network connectivity from the gardenlet to the Garden cluster (not the other way round), so it can be used to register Kubernetes clusters with no public endpoint. ## Procedure 1. First, an initial gardenlet needs to be deployed: - * Deploy it manually if you have special requirements. More information: [Deploy a Gardenlet Manually](deploy_gardenlet_manually.md) - * Let the Gardener installer deploy it automatically otherwise. More information: [Automatic Deployment of Gardenlets](deploy_gardenlet_automatically.md) - -1. To add additional seed clusters, it is recommended to use regular shoot clusters.
You can do this by creating a `ManagedSeed` resource with a `gardenlet` section as described in [Register Shoot as Seed](../usage/managed_seed.md). - - + * Deploy it manually if you have special requirements. For more information, see [Deploy a Gardenlet Manually](deploy_gardenlet_manually.md). + * Let the Gardener installer deploy it automatically otherwise. For more information, see [Automatic Deployment of Gardenlets](deploy_gardenlet_automatically.md). +1. To add additional seed clusters, it is recommended to use regular shoot clusters. You can do this by creating a `ManagedSeed` resource with a `gardenlet` section as described in [Register Shoot as Seed](../usage/managed_seed.md). diff --git a/docs/deployment/deploy_gardenlet_automatically.md b/docs/deployment/deploy_gardenlet_automatically.md index 0c9482aa60e..34bcf58f042 100755 --- a/docs/deployment/deploy_gardenlet_automatically.md +++ b/docs/deployment/deploy_gardenlet_automatically.md @@ -1,22 +1,21 @@ -# Automatic Deployment of Gardenlets +# Automatic Deployment of gardenlets -The gardenlet can automatically deploy itself into shoot clusters, and register this cluster as a seed cluster. +The gardenlet can automatically deploy itself into shoot clusters, and register such a cluster as a seed cluster. These clusters are called "managed seeds" (aka "shooted seeds"). This procedure is the preferred way to add additional seed clusters, because shoot clusters already come with production-grade qualities that are also demanded for seed clusters.
## Prerequisites -The only prerequisite is to register an initial cluster as a seed cluster that has already a gardenlet deployed: +The only prerequisite is to register an initial cluster as a seed cluster that already has a gardenlet deployed in one of the following ways: -* This gardenlet was either deployed as part of a Gardener installation using a setup tool (for example, `gardener/garden-setup`) or -* the gardenlet was deployed manually - - for a step-by-step manual installation Guide see: [Deploy a Gardenlet Manually](deploy_gardenlet_manually.md)) +* The gardenlet was deployed as part of a Gardener installation using a setup tool (for example, `gardener/garden-setup`). +* The gardenlet was deployed manually (for a step-by-step manual installation guide, see [Deploy a Gardenlet Manually](deploy_gardenlet_manually.md)). > The initial cluster can be the garden cluster itself. -## Self-Deployment of Gardenlets in Additional Managed Seed Clusters +## Self-Deployment of gardenlets in Additional Managed Seed Clusters -For a better scalability, you usually need more seed clusters that you can create as follows: +For better scalability, you usually need more seed clusters, which you can create as follows: 1. Use the initial cluster as the seed cluster for other managed seed clusters. It hosts the control planes of the other seed clusters. 1. The gardenlet deployed in the initial cluster deploys itself automatically into the managed seed clusters.
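The registration referenced above boils down to a `ManagedSeed` resource in the garden cluster. A minimal sketch (resource names are illustrative; see Register Shoot as Seed for the authoritative schema):

```yaml
apiVersion: seedmanagement.gardener.cloud/v1alpha1
kind: ManagedSeed
metadata:
  name: my-managed-seed   # illustrative
  namespace: garden       # ManagedSeeds live in the garden namespace
spec:
  shoot:
    name: my-shoot        # the existing shoot cluster to register as a seed
  gardenlet: {}           # deploy the gardenlet into the shoot with default values
```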
@@ -25,7 +24,5 @@ The advantage of this approach is that there’s only one initial gardenlet inst ## Related Links -[Register Shoot as Seed](../usage/managed_seed.md) - -[garden-setup](http://github.com/gardener/garden-setup) - +- [Register Shoot as Seed](../usage/managed_seed.md) +- [garden-setup](http://github.com/gardener/garden-setup) \ No newline at end of file diff --git a/docs/deployment/deploy_gardenlet_manually.md b/docs/deployment/deploy_gardenlet_manually.md index 63d93fb7ac3..d30cf6c3ec4 100755 --- a/docs/deployment/deploy_gardenlet_manually.md +++ b/docs/deployment/deploy_gardenlet_manually.md @@ -1,4 +1,4 @@ -# Deploy a Gardenlet Manually +# Deploy a gardenlet Manually Manually deploying a gardenlet is required in the following cases: @@ -14,11 +14,11 @@ Manually deploying a gardenlet is required in the following cases: (The gardenlet is not restricted to run in the seed cluster or to be deployed into a Kubernetes cluster at all). -> Once you’ve deployed a gardenlet manually, for example, behind a firewall, you can deploy new gardenlets automatically. The manually deployed gardenlet is then used as a template for the new gardenlets. More information: [Automatic Deployment of Gardenlets](deploy_gardenlet_automatically.md). +> Once you’ve deployed a gardenlet manually, for example, behind a firewall, you can deploy new gardenlets automatically. The manually deployed gardenlet is then used as a template for the new gardenlets. For more information, see [Automatic Deployment of Gardenlets](deploy_gardenlet_automatically.md). ## Prerequisites -### Kubernetes cluster that should be registered as a seed cluster +### Kubernetes Cluster that Should Be Registered as a Seed Cluster - Verify that the cluster has a [supported Kubernetes version](../usage/supported_k8s_versions.md). @@ -26,9 +26,9 @@ Manually deploying a gardenlet is required in the following cases: You need to configure this information in the `Seed` configuration. 
Gardener uses this information to check that the shoot cluster isn’t created with overlapping CIDR ranges. -- Every Seed cluster needs an Ingress controller which distributes external requests to internal components like grafana and prometheus. Gardener supports two approaches to achieve this: +- Every seed cluster needs an Ingress controller that distributes external requests to internal components like Grafana and Prometheus. Gardener supports two approaches to achieve this: -a. Gardener managed Ingress controller and DNS records. For this configure the following lines in your [Seed resource](../../example/50-seed.yaml): +a. Gardener-managed Ingress controller and DNS records. For this, configure the following lines in your [Seed resource](../../example/50-seed.yaml): ```yaml spec: dns: @@ -45,25 +45,25 @@ spec: ``` -⚠ Please note that if you set `.spec.ingress` then `.spec.dns.ingressDomain` must be `nil`. +⚠ Please note that if you set `.spec.ingress`, then `.spec.dns.ingressDomain` must be `nil`. b. Self-managed DNS record and Ingress controller: :warning: -There should exist a DNS record `*.ingress.` where `` is the value of the `.dns.ingressDomain` field of [a Seed cluster resource](../../example/50-seed.yaml) (or the [respective Gardenlet configuration](../../example/20-componentconfig-gardenlet.yaml#L84-L85)). +There should exist a DNS record `*.ingress.`, where `` is the value of the `.dns.ingressDomain` field of [a Seed cluster resource](../../example/50-seed.yaml) (or the [respective Gardenlet configuration](../../example/20-componentconfig-gardenlet.yaml#L84-L85)). *This is how it could be done for the Nginx ingress controller* Deploy nginx into the `kube-system` namespace in the Kubernetes cluster that should be registered as a `Seed`. -Nginx will on most cloud providers create the service with type `LoadBalancer` with an external ip. +Nginx will, on most cloud providers, create the service with type `LoadBalancer` with an external IP.
``` NAME TYPE CLUSTER-IP EXTERNAL-IP nginx-ingress-controller LoadBalancer 10.0.15.46 34.200.30.30 ``` -Create a wildcard `A` record (e.g *.ingress.sweet-seed.. IN A 34.200.30.30) with your DNS provider and point it to the external ip of the ingress service. This ingress domain is later required to register the `Seed` cluster. +Create a wildcard `A` record (e.g., *.ingress.sweet-seed.. IN A 34.200.30.30) with your DNS provider and point it to the external IP of the ingress service. This ingress domain is later required to register the `Seed` cluster. Please configure the ingress domain in the `Seed` specification as follows: @@ -73,7 +73,7 @@ spec: ingressDomain: ingress.sweet-seed. ``` -⚠ Please note that if you set `.spec.dns.ingressDomain` then `.spec.ingress` must be `nil`. +⚠ Please note that if you set `.spec.dns.ingressDomain`, then `.spec.ingress` must be `nil`. ### `kubeconfig` for the Seed Cluster @@ -86,7 +86,7 @@ that the gardenlet deployment uses by default to talk to the Seed API server. > If the gardenlet isn’t deployed in the seed cluster, > the gardenlet can be configured to use a `kubeconfig`, > which also requires the above-mentioned privileges, from a mounted directory. -> The `kubeconfig` is specified in section `seedClientConnection.kubeconfig` +> The `kubeconfig` is specified in the `seedClientConnection.kubeconfig` section > of the [Gardenlet configuration](../../example/20-componentconfig-gardenlet.yaml). > This configuration option isn’t used in the following, > as this procedure only describes the recommended setup option @@ -103,19 +103,18 @@ that the gardenlet deployment uses by default to talk to the Seed API server. 1. [Deploy the gardenlet](#deploy-the-gardenlet) 1.
[Check that the gardenlet is successfully deployed](#check-that-the-gardenlet-is-successfully-deployed) -## Create a bootstrap token secret in the `kube-system` namespace of the garden cluster +## Create a Bootstrap Token Secret in the `kube-system` Namespace of the Garden Cluster The gardenlet needs to talk to the [Gardener API server](../concepts/apiserver.md) residing in the garden cluster. The gardenlet can be configured with an already existing garden cluster `kubeconfig` in one of the following ways: - - Either by specifying `gardenClientConnection.kubeconfig` - in the [Gardenlet configuration](../../example/20-componentconfig-gardenlet.yaml) or - - - by supplying the environment variable `GARDEN_KUBECONFIG` pointing to + - By specifying `gardenClientConnection.kubeconfig` + in the [Gardenlet configuration](../../example/20-componentconfig-gardenlet.yaml). + - By supplying the environment variable `GARDEN_KUBECONFIG` pointing to a mounted `kubeconfig` file. -The preferred way however, is to use the gardenlets ability to request +The preferred way, however, is to use the gardenlet's ability to request a signed certificate for the garden cluster by leveraging [Kubernetes Certificate Signing Requests](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/). The gardenlet performs a TLS bootstrapping process that is similar to the @@ -124,7 +123,7 @@ Make sure that the API server of the garden cluster has [bootstrap token authentication](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/#enabling-bootstrap-token-authentication) enabled.
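For a garden cluster whose kube-apiserver is run as a static pod or deployment, enabling bootstrap token authentication amounts to the upstream `--enable-bootstrap-token-auth` flag (manifest excerpt; a sketch, not a complete pod spec):

```yaml
# Excerpt of a kube-apiserver pod spec; only the relevant flag is shown.
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --enable-bootstrap-token-auth=true
    # ... remaining flags unchanged
```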
-The client credentials required for the gardenlets TLS bootstrapping process, +The client credentials required for the gardenlet's TLS bootstrapping process need to be either `token` or `certificate` (OIDC isn’t supported) and have permissions to create a Certificate Signing Request ([CSR](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/)). It’s recommended to use [bootstrap tokens](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/) @@ -161,7 +160,7 @@ stringData: When you later prepare the gardenlet Helm chart, a `kubeconfig` based on this token is shared with the gardenlet upon deployment. -## Create RBAC roles for the gardenlet to allow bootstrapping in the garden cluster +## Create RBAC Roles for the gardenlet to Allow Bootstrapping in the Garden Cluster This step is only required if the gardenlet you deploy is the first gardenlet in the Gardener installation. @@ -239,14 +238,14 @@ subjects: ℹ️ After bootstrapping, the gardenlet has full administrative access to the garden cluster. You might be interested to harden this and limit its permissions to only resources related to the seed cluster it is responsible for. -Please take a look into [this document](gardenlet_api_access.md). +Please take a look at [Scoped API Access for Gardenlets](gardenlet_api_access.md). -## Prepare the gardenlet Helm chart +## Prepare the gardenlet Helm Chart This section only describes the minimal configuration, using the global configuration values of the gardenlet Helm chart. For an overview over all values, see the [configuration values](../../charts/gardener/gardenlet/values.yaml). -We refer to the global configuration values as _gardenlet configuration_ in the remaining procedure. +We refer to the global configuration values as _gardenlet configuration_ in the following procedure. 1. 
Create a gardenlet configuration `gardenlet-values.yaml` based on [this template](https://github.com/gardener/gardener/blob/master/charts/gardener/gardenlet/values.yaml). @@ -274,7 +273,7 @@ We refer to the global configuration values as _gardenlet configuration_ in the token: ``` -3. In section `gardenClientConnection.bootstrapKubeconfig` of your gardenlet configuration, provide the bootstrap `kubeconfig` together with a name and namespace to the gardenlet Helm chart. +3. In the `gardenClientConnection.bootstrapKubeconfig` section of your gardenlet configuration, provide the bootstrap `kubeconfig` together with a name and namespace to the gardenlet Helm chart. ```yaml gardenClientConnection: @@ -287,7 +286,7 @@ We refer to the global configuration values as _gardenlet configuration_ in the The bootstrap `kubeconfig` is stored in the specified secret. -4. In section `gardenClientConnection.kubeconfigSecret` of your gardenlet configuration, +4. In the `gardenClientConnection.kubeconfigSecret` section of your gardenlet configuration, define a name and a namespace where the gardenlet stores the real `kubeconfig` that it creates during the bootstrap process. If the secret doesn't exist, the gardenlet creates it for you. @@ -299,11 +298,11 @@ We refer to the global configuration values as _gardenlet configuration_ in the namespace: garden ``` -### Updating the garden cluster CA +### Updating the Garden Cluster CA -The kubeconfig created by the gardenlet in step 4 will not be recreated as long as it exists, even if a new bootstrap kubeconfig is provided. To enable rotation of the garden cluster CA certificate, a new bundle can be provided via the `gardenClientConnection.gardenClusterCACert` field. If the provided bundle differs from the one currently in the gardenlet's kubeconfig secret then it will be updated. To remove the CA completely (e.g. when switching to a publicly trusted endpoint) this field can be set to either `none` or `null`. 
+The kubeconfig created by the gardenlet in step 4 will not be recreated as long as it exists, even if a new bootstrap kubeconfig is provided. To enable rotation of the garden cluster CA certificate, a new bundle can be provided via the `gardenClientConnection.gardenClusterCACert` field. If the provided bundle differs from the one currently in the gardenlet's kubeconfig secret, it will be updated. To remove the CA completely (e.g., when switching to a publicly trusted endpoint), this field can be set to either `none` or `null`. -## Automatically register shoot cluster as a seed cluster +## Automatically Register a Shoot Cluster as a Seed Cluster A seed cluster can either be registered by manually creating the [`Seed` resource](../../example/50-seed.yaml) @@ -315,12 +314,12 @@ However, it can also be used to have a streamlined seed cluster registration pro > This procedure doesn’t describe all the possible configurations > for the `Seed` resource. For more information, see: -> * [Example Seed resource](../../example/50-seed.yaml) -> * [Configurable Seed settings](../usage/seed_settings.md). +> - [Example Seed resource](../../example/50-seed.yaml) +> - [Configurable Seed settings](../usage/seed_settings.md) -### Adjust the gardenlet component configuration +### Adjust the gardenlet Component Configuration -1. Supply the `Seed` resource in section `seedConfig` of your gardenlet configuration `gardenlet-values.yaml`. +1. Supply the `Seed` resource in the `seedConfig` section of your gardenlet configuration `gardenlet-values.yaml`. 1. Add the `seedConfig` to your gardenlet configuration `gardenlet-values.yaml`. The field `seedConfig.spec.provider.type` specifies the infrastructure provider type (for example, `aws`) of the seed cluster. For all supported infrastructure providers, see [Known Extension Implementations](../../extensions/README.md#known-extension-implementations).
@@ -345,7 +344,7 @@ For all supported infrastructure providers, see [Known Extension Implementations type: ``` -### Optional: Enable HA mode +### Optional: Enable HA Mode You may consider running `gardenlet` with multiple replicas, especially if the seed cluster is configured to host [HA shoot control planes](../usage/shoot_high_availability.md). Therefore, the following Helm chart values define the degree of high availability you want to achieve for the `gardenlet` deployment. @@ -355,7 +354,7 @@ replicaCount: 2 # or more if a higher failure tolerance is required. failureToleranceType: zone # One of `zone` or `node` - defines how replicas are spread. ``` -### Optional: Enable backup and restore +### Optional: Enable Backup and Restore The seed cluster can be set up with backup and restore for the main `etcds` of shoot clusters. @@ -382,7 +381,7 @@ data: # client credentials format is provider specific ``` -Configure the `Seed` resource in section `seedConfig` of your gardenlet configuration to use backup and restore: +Configure the `Seed` resource in the `seedConfig` section of your gardenlet configuration to use backup and restore: ```yaml ... @@ -454,7 +453,7 @@ config: namespace: garden ``` -Deploy the gardenlet Helm chart to the Kubernetes cluster. +Deploy the gardenlet Helm chart to the Kubernetes cluster: ```bash helm install gardenlet charts/gardener/gardenlet \ @@ -470,7 +469,7 @@ This helm chart creates: - The secret (`garden`/`gardenlet-bootstrap-kubeconfig`) containing the bootstrap `kubeconfig`. - The gardenlet deployment in the `garden` namespace. -## Check that the gardenlet is successfully deployed +## Check that the gardenlet Is Successfully Deployed 1. Check that the gardenlets certificate bootstrap was successful. @@ -536,6 +535,5 @@ This helm chart creates: ## Related Links -[Issue #1724: Harden Gardenlet RBAC privileges](https://github.com/gardener/gardener/issues/1724). - -[Backup and Restore](../concepts/backup-restore.md). 
+- [Issue #1724: Harden Gardenlet RBAC privileges](https://github.com/gardener/gardener/issues/1724). +- [Backup and Restore](../concepts/backup-restore.md). diff --git a/docs/deployment/feature_gates.md b/docs/deployment/feature_gates.md index 7a05e0d0b17..b55e2e9512c 100644 --- a/docs/deployment/feature_gates.md +++ b/docs/deployment/feature_gates.md @@ -4,19 +4,19 @@ This page contains an overview of the various feature gates an administrator can ## Overview -Feature gates are a set of key=value pairs that describe Gardener features. You can turn these features on or off using the a component configuration file for a specific component. +Feature gates are a set of key=value pairs that describe Gardener features. You can turn these features on or off using the component configuration file for a specific component. -Each Gardener component lets you enable or disable a set of feature gates that are relevant to that component. For example this is the configuration of the [gardenlet](../../example/20-componentconfig-gardenlet.yaml) component. +Each Gardener component lets you enable or disable a set of feature gates that are relevant to that component. For example, this is the configuration of the [gardenlet](../../example/20-componentconfig-gardenlet.yaml) component. The following tables are a summary of the feature gates that you can set on different Gardener components. * The “Since” column contains the Gardener release when a feature is introduced or its release stage is changed. * The “Until” column, if not empty, contains the last Gardener release in which you can still use a feature gate. -* If a feature is in the Alpha or Beta state, you can find the feature listed in the Alpha/Beta feature gate table. +* If a feature is in the *Alpha* or *Beta* state, you can find the feature listed in the Alpha/Beta feature gate table. * If a feature is stable you can find all stages for that feature listed in the Graduated/Deprecated feature gate table. 
* The Graduated/Deprecated feature gate table also lists deprecated and withdrawn features. -## Feature gates for Alpha or Beta features +## Feature Gates for Alpha or Beta Features | Feature | Default | Stage | Since | Until | | -------------------------------------------- | ------- | ------- | ------ | ------ | @@ -37,7 +37,7 @@ The following tables are a summary of the feature gates that you can set on diff | DefaultSeccompProfile | `false` | `Alpha` | `1.54` | | | CoreDNSQueryRewriting | `false` | `Alpha` | `1.55` | | -## Feature gates for graduated or deprecated features +## Feature Gates for Graduated or Deprecated Features | Feature | Default | Stage | Since | Until | |----------------------------------------------|---------|--------------|--------|--------| @@ -110,7 +110,7 @@ The following tables are a summary of the feature gates that you can set on diff | ReversedVPN | `true` | `Beta` | `1.42` | `1.62` | | ReversedVPN | `true` | `GA` | `1.63` | | -## Using a feature +## Using a Feature A feature can be in *Alpha*, *Beta* or *GA* stage. An *Alpha* feature means: @@ -151,9 +151,9 @@ A *General Availability* (GA) feature is also referred to as a *stable* feature. | HVPA | `gardenlet`, `gardener-operator` | Enables simultaneous horizontal and vertical scaling in garden or seed clusters. | | HVPAForShootedSeed | `gardenlet` | Enables simultaneous horizontal and vertical scaling in managed seed (aka "shooted seed") clusters. | | ManagedIstio (deprecated) | `gardenlet` | Enables a Gardener-tailored [Istio](https://istio.io) in each Seed cluster. Disable this feature if Istio is already installed in the cluster. Istio is not automatically removed if this feature is disabled. See the [detailed documentation](../usage/istio.md) for more information. | -| APIServerSNI (deprecated) | `gardenlet` | Enables only one LoadBalancer to be used for every Shoot cluster API server in a Seed. 
Enable this feature when `ManagedIstio` is enabled or Istio is manually deployed in Seed cluster. See [GEP-8](../proposals/08-shoot-apiserver-via-sni.md) for more details. |
+| APIServerSNI (deprecated) | `gardenlet` | Enables only one LoadBalancer to be used for every Shoot cluster API server in a Seed. Enable this feature when `ManagedIstio` is enabled or Istio is manually deployed in the Seed cluster. See [GEP-8](../proposals/08-shoot-apiserver-via-sni.md) for more details. |
| SeedChange | `gardener-apiserver` | Enables updating the `spec.seedName` field during shoot validation from a non-empty value in order to trigger shoot control plane migration. |
-| ReversedVPN | `gardenlet` | Reverses the connection setup of the vpn tunnel between the Seed and the Shoot cluster(s). It allows Seed and Shoot clusters to be in different networks with only direct access in one direction (Shoot -> Seed). In addition to that, it reduces the amount of load balancers required, i.e. no load balancers are required for the vpn tunnel anymore. It requires `APIServerSNI` and kubernetes version `1.18` or higher to work. Details can be found in [GEP-14](../proposals/14-reversed-cluster-vpn.md). |
+| ReversedVPN | `gardenlet` | Reverses the connection setup of the VPN tunnel between the Seed and the Shoot cluster(s). It allows Seed and Shoot clusters to be in different networks with only direct access in one direction (Shoot -> Seed). In addition to that, it reduces the number of load balancers required, i.e., no load balancers are required for the VPN tunnel anymore. It requires `APIServerSNI` and Kubernetes version `1.18` or higher to work. Details can be found in [GEP-14](../proposals/14-reversed-cluster-vpn.md). |
| CopyEtcdBackupsDuringControlPlaneMigration | `gardenlet` | Enables the copy of etcd backups from the object store of the source seed to the object store of the destination seed during control plane migration.
| | SecretBindingProviderValidation | `gardener-apiserver` | Enables validations on Gardener API server that:
- requires the provider type of a SecretBinding to be set (on SecretBinding creation)
- requires the SecretBinding provider type to match the Shoot provider type (on Shoot creation)
- enforces immutability on the provider type of a SecretBinding | | ForceRestore | `gardenlet` | Enables forcing the shoot's restoration to the destination seed during control plane migration if the preparation for migration in the source seed is not finished after a certain grace period and is considered unlikely to succeed (falling back to the [control plane migration "bad case" scenario](../proposals/17-shoot-control-plane-migration-bad-case.md)). If you enable this feature gate, make sure to also enable `CopyEtcdBackupsDuringControlPlaneMigration`. | diff --git a/docs/deployment/gardenlet_api_access.md b/docs/deployment/gardenlet_api_access.md index 01bd997a0f6..2f7ef784898 100644 --- a/docs/deployment/gardenlet_api_access.md +++ b/docs/deployment/gardenlet_api_access.md @@ -1,8 +1,8 @@ --- -title: Gardenlet API Access +title: gardenlet API Access --- -# Scoped API Access for Gardenlets +# Scoped API Access for gardenlets By default, `gardenlet`s have administrative access in the garden cluster. They are able to execute any API request on any object independent of whether the object is related to the seed cluster the `gardenlet` is responsible for. @@ -20,7 +20,7 @@ It can be translated to Gardener and Gardenlets with their `Seed` and `Shoot` re ## Flow Diagram The following diagram shows how the two plugins are included in the request flow of a `gardenlet`. -When they are not enabled then the `kube-apiserver` is internally authorizing the request via RBAC before forwarding the request directly to the `gardener-apiserver`, i.e., the `gardener-admission-controller` would not be consulted (this is not entirely correct because it also serves other admission webhook handlers, but for simplicity reasons this document focuses on the API access scope only). 
+When they are not enabled, then the `kube-apiserver` is internally authorizing the request via RBAC before forwarding the request directly to the `gardener-apiserver`, i.e., the `gardener-admission-controller` would not be consulted (this is not entirely correct because it also serves other admission webhook handlers, but for simplicity reasons this document focuses on the API access scope only). When enabling the plugins, there is one additional step for each before the `gardener-apiserver` responds to the request. @@ -51,19 +51,19 @@ Today, the following rules are implemented: | `Namespace` | `get` | `Namespace` -> `Shoot` -> `Seed` | Allow `get` requests for `Namespace`s of `Shoot`s that are assigned to the `gardenlet`'s `Seed`. Always allow `get` requests for the `garden` `Namespace`. | | `Project` | `get` | `Project` -> `Namespace` -> `Shoot` -> `Seed` | Allow `get` requests for `Project`s referenced by the `Namespace` of `Shoot`s that are assigned to the `gardenlet`'s `Seed`. | | `SecretBinding` | `get` | `SecretBinding` -> `Shoot` -> `Seed` | Allow only `get` requests for `SecretBinding`s referenced by `Shoot`s that are assigned to the `gardenlet`'s `Seed`. | -| `Secret` | `create`, `get`, `update`, `patch`, `delete`(, `list`, `watch`) | `Secret` -> `Seed`, `Secret` -> `Shoot` -> `Seed`, `Secret` -> `SecretBinding` -> `Shoot` -> `Seed`, `BackupBucket` -> `Seed` | Allow `get`, `list`, `watch` requests for all `Secret`s in the `seed-` namespace. Allow only `create`, `get`, `update`, `patch`, `delete` requests for the `Secret`s related to resources assigned to the gardenlet`'s `Seed`s. | +| `Secret` | `create`, `get`, `update`, `patch`, `delete`(, `list`, `watch`) | `Secret` -> `Seed`, `Secret` -> `Shoot` -> `Seed`, `Secret` -> `SecretBinding` -> `Shoot` -> `Seed`, `BackupBucket` -> `Seed` | Allow `get`, `list`, `watch` requests for all `Secret`s in the `seed-` namespace. 
Allow only `create`, `get`, `update`, `patch`, `delete` requests for the `Secret`s related to resources assigned to the `gardenlet`'s `Seed`s. |
| `Seed` | `get`, `list`, `watch`, `create`, `update`, `patch`, `delete` | `Seed` | Allow `get`, `list`, `watch` requests for all `Seed`s. Allow only `create`, `update`, `patch`, `delete` requests for the `gardenlet`'s `Seed`s. [1] |
-| `ServiceAccount` | `create`, `get`, `update`, `patch`, `delete` | `ServiceAccount` -> `ManagedSeed` -> `Shoot` -> `Seed` | Allow `create`, `get`, `update`, `patch` requests for `ManagedSeed`s in the bootstrapping phase assigned to the gardenlet's `Seed`s. Allow `delete` requests from gardenlets bootstrapped via `ManagedSeed`s. |
+| `ServiceAccount` | `create`, `get`, `update`, `patch`, `delete` | `ServiceAccount` -> `ManagedSeed` -> `Shoot` -> `Seed` | Allow `create`, `get`, `update`, `patch` requests for `ManagedSeed`s in the bootstrapping phase assigned to the `gardenlet`'s `Seed`s. Allow `delete` requests from gardenlets bootstrapped via `ManagedSeed`s. |
| `Shoot` | `get`, `list`, `watch`, `update`, `patch` | `Shoot` -> `Seed` | Allow `get`, `list`, `watch` requests for all `Shoot`s. Allow only `update`, `patch` requests for `Shoot`s assigned to the `gardenlet`'s `Seed`. |
| `ShootState` | `get`, `create`, `update`, `patch` | `ShootState` -> `Shoot` -> `Seed` | Allow only `get`, `create`, `update`, `patch` requests for `ShootState`s belonging to `Shoot`s that are assigned to the `gardenlet`'s `Seed`.
| -[1] If you use `ManagedSeed` resources then the gardenlet reconciling them ("parent gardenlet") may be allowed to submit certain requests for the `Seed` resources resulting out of such `ManagedSeed` reconciliations (even if the "parent gardenlet" is not responsible for them): +> [1] If you use `ManagedSeed` resources then the `gardenlet` reconciling them ("parent `gardenlet`") may be allowed to submit certain requests for the `Seed` resources resulting out of such `ManagedSeed` reconciliations (even if the "parent `gardenlet`" is not responsible for them): -- ℹ️ It is allowed to delete the `Seed` resources if the corresponding `ManagedSeed` objects already have a `deletionTimestamp` (this is secure as gardenlets themselves don't have permissions for deleting `ManagedSeed`s). +ℹ️ It is allowed to delete the `Seed` resources if the corresponding `ManagedSeed` objects already have a `deletionTimestamp` (this is secure as `gardenlet`s themselves don't have permissions for deleting `ManagedSeed`s). ## `SeedAuthorizer` Authorization Webhook Enablement -The `SeedAuthorizer` is implemented as [Kubernetes authorization webhook](https://kubernetes.io/docs/reference/access-authn-authz/webhook/) and part of the [`gardener-admission-controller`](../concepts/admission-controller.md) component running in the garden cluster. +The `SeedAuthorizer` is implemented as a [Kubernetes authorization webhook](https://kubernetes.io/docs/reference/access-authn-authz/webhook/) and part of the [`gardener-admission-controller`](../concepts/admission-controller.md) component running in the garden cluster. 🎛 In order to activate it, you have to follow these steps: @@ -93,9 +93,9 @@ The `SeedAuthorizer` is implemented as [Kubernetes authorization webhook](https: current-context: auth-webhook ``` -3. When deploying the [Gardener `controlplane` Helm chart](../../charts/gardener/controlplane), set `.global.rbac.seedAuthorizer.enabled=true`. 
This will prevent that the RBAC resources granting global access for all gardenlets will be deployed.
+3. When deploying the [Gardener `controlplane` Helm chart](../../charts/gardener/controlplane), set `.global.rbac.seedAuthorizer.enabled=true`. This will prevent the RBAC resources granting global access for all `gardenlet`s from being deployed.

-4. Delete the existing RBAC resources granting global access for all gardenlets by running:
+4. Delete the existing RBAC resources granting global access for all `gardenlet`s by running:

   ```bash
   kubectl delete \
     clusterrole.rbac.authorization.k8s.io/gardener.cloud:system:seeds \
@@ -105,7 +105,7 @@ The `SeedAuthorizer` is implemented as [Kubernetes authorization webhook](https:

   Please note that you should activate the [`SeedRestriction`](#seedrestriction-admission-webhook-enablement) admission handler as well.

-> [1] The reason for the fact that `Webhook` authorization plugin should appear after `RBAC` is that the `kube-apiserver` will be depending on the `gardener-admission-controller` (serving the webhook). However, the `gardener-admission-controller` can only start when `gardener-apiserver` runs, but `gardener-apiserver` itself can only start when `kube-apiserver` runs. If `Webhook` is before `RBAC` then `gardener-apiserver` might not be able to start, leading to a deadlock.
+> [1] The reason the `Webhook` authorization plugin should appear after `RBAC` is that the `kube-apiserver` depends on the `gardener-admission-controller` (serving the webhook). However, the `gardener-admission-controller` can only start when `gardener-apiserver` runs, but `gardener-apiserver` itself can only start when `kube-apiserver` runs. If `Webhook` is before `RBAC`, then `gardener-apiserver` might not be able to start, leading to a deadlock.
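To make the ordering requirement from footnote [1] concrete, the relevant `kube-apiserver` flags could be sketched as follows. This is only an illustration: the webhook kubeconfig path is an assumption, and the exact flag set depends on how your garden cluster's `kube-apiserver` is managed.

```text
--authorization-mode=RBAC,Webhook          # RBAC must come before Webhook
--authorization-webhook-config-file=/etc/kubernetes/auth-webhook-kubeconfig.yaml
```

With this ordering, requests that RBAC already allows are never deferred to the webhook, so `gardener-apiserver` can come up even while the `gardener-admission-controller` (serving the webhook) is still starting.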
### Authorizer Decisions @@ -118,7 +118,7 @@ As mentioned earlier, it's the authorizer's job to evaluate API requests and ret For backwards compatibility, no requests are denied at the moment, so that they are still deferred to a subsequent authorizer like RBAC. Though, this might change in the future. -First, the `SeedAuthorizer` extracts the `Seed` name from the API request. This requires a proper TLS certificate the `gardenlet` uses to contact the API server and is automatically given if [TLS bootstrapping](../concepts/gardenlet.md#TLS-Bootstrapping) is used. +First, the `SeedAuthorizer` extracts the `Seed` name from the API request. This requires a proper TLS certificate that the `gardenlet` uses to contact the API server and is automatically given if [TLS bootstrapping](../concepts/gardenlet.md#TLS-Bootstrapping) is used. Concretely, the authorizer checks the certificate for name `gardener.cloud:system:seed:` and group `gardener.cloud:system:seeds`. In cases where this information is missing e.g., when a custom Kubeconfig is used, the authorizer cannot make any decision. Thus, RBAC is still a considerable option to restrict the `gardenlet`'s access permission if the above explained preconditions are not given. @@ -126,7 +126,7 @@ With the `Seed` name at hand, the authorizer checks for an **existing path** fro ### Implementation Details -Internally, the `SeedAuthorizer` uses a directed, acyclic graph data structure in order to efficiently respond to authorization requests for gardenlets: +Internally, the `SeedAuthorizer` uses a directed, acyclic graph data structure in order to efficiently respond to authorization requests for `gardenlet`s: * A vertex in this graph represents a Kubernetes resource with its kind, namespace, and name (e.g., `Shoot:garden-my-project/my-shoot`). 
* An edge from vertex `u` to vertex `v` in this graph exists when
@@ -139,14 +139,14 @@ However, there might also be a `ShootState` or a `BackupEntry` resource strictly

![Resource Dependency Graph](content/gardenlet_api_access_graph.png)

-In above picture the resources that are actively watched have are shaded.
-Gardener resources are green while Kubernetes resources are blue.
-It shows the dependencies between the resources and how the graph is built based on above rules.
+In the above picture, the resources that are actively watched are shaded.
+Gardener resources are green, while Kubernetes resources are blue.
+It shows the dependencies between the resources and how the graph is built based on the above rules.

-ℹ️ Above picture shows all resources that may be accessed by `gardenlet`s, except for the `Quota` resource which is only included for completeness.
+ℹ️ The above picture shows all resources that may be accessed by `gardenlet`s, except for the `Quota` resource which is only included for completeness.

-Now, when a `gardenlet` wants to access certain resources then the `SeedAuthorizer` uses a Depth-First traversal starting from the vertex representing the resource in question, e.g., from a `Project` vertex.
-If there is a path from the `Project` vertex to the vertex representing the `Seed` the gardenlet is responsible for then it allows the request.
+Now, when a `gardenlet` wants to access certain resources, then the `SeedAuthorizer` uses a Depth-First traversal starting from the vertex representing the resource in question, e.g., from a `Project` vertex.
+If there is a path from the `Project` vertex to the vertex representing the `Seed` the `gardenlet` is responsible for, then it allows the request.
#### Metrics
@@ -159,13 +159,13 @@ The `SeedAuthorizer` registers the following metrics related to the mentioned gr

#### Debug Handler

-When the `.server.enableDebugHandlers` field in the `gardener-admission-controller`'s component configuration is set to `true` then it serves a handler that can be used for debugging the resource dependency graph under `/debug/resource-dependency-graph`.
+When the `.server.enableDebugHandlers` field in the `gardener-admission-controller`'s component configuration is set to `true`, then it serves a handler that can be used for debugging the resource dependency graph under `/debug/resource-dependency-graph`.

-🚨 Only use this setting for development purposes as it enables unauthenticated users to view all data if they have access to the `gardener-admission-controller` component.
+🚨 Only use this setting for development purposes, as it enables unauthenticated users to view all data if they have access to the `gardener-admission-controller` component.

The handler renders an HTML page displaying the current graph with a list of vertices and their associated incoming and outgoing edges to other vertices.
Depending on the size of the Gardener landscape (and consequently, the size of the graph), it might not be possible to render it in its entirety.
-If there are more than 2000 vertices then the default filtering will selected for `kind=Seed` to prevent overloading the output.
+If there are more than 2000 vertices, then the default filtering for `kind=Seed` will be selected to prevent overloading the output.

_Example output_:

@@ -216,7 +216,7 @@ However, this does only work for vertices belonging to resources that are only c
For example, the vertex for a `SecretBinding` can either be created in the `SecretBinding` handler itself or in the `Shoot` handler.
In such cases, deleting the vertex before (re-)computing the edges might lead to race conditions and potentially renders the graph invalid.
Consequently, instead of deleting the vertex, only the edges the respective handler is responsible for are deleted. -If the vertex ends up with no remaining edges then it also gets deleted automatically. +If the vertex ends up with no remaining edges, then it also gets deleted automatically. Afterwards, the vertex can either be added again or the updated edges can be created. ## `SeedRestriction` Admission Webhook Enablement @@ -230,6 +230,6 @@ Please note that it should only be activated when the `SeedAuthorizer` is active ### Admission Decisions The admission's purpose is to perform extended validation on requests which require the body of the object in question. -Additionally, it handles `CREATE` requests of gardenlets (above discussed resource dependency graph cannot be used in such cases because there won't be any vertex/edge for non-existing resources). +Additionally, it handles `CREATE` requests of `gardenlet`s (the above discussed resource dependency graph cannot be used in such cases because there won't be any vertex/edge for non-existing resources). Gardenlets are restricted to only create new resources which are somehow related to the seed clusters they are responsible for. diff --git a/docs/deployment/getting_started_locally.md b/docs/deployment/getting_started_locally.md index b5355e2af9b..9a3a765c2f1 100644 --- a/docs/deployment/getting_started_locally.md +++ b/docs/deployment/getting_started_locally.md @@ -1,8 +1,10 @@ -# Deploying Gardener locally +# Deploying Gardener Locally This document will walk you through deploying Gardener on your local machine. If you encounter difficulties, please open an issue so that we can make this process easier. +## Overview + Gardener runs in any Kubernetes cluster. In this guide, we will start a [KinD](https://kind.sigs.k8s.io/) cluster which is used as both garden and seed cluster (please refer to the [architecture overview](../concepts/architecture.md)) for simplicity. 
@@ -17,12 +19,12 @@ Based on [Skaffold](https://skaffold.dev/), the container images for all require > Please note that 8 CPU / 8Gi memory might not be enough for more than two `Shoot` clusters, i.e., you might need to increase these values if you want to run additional `Shoot`s. > If you plan on following the optional steps to [create a second seed cluster](#optional-setting-up-a-second-seed-cluster), the required resources will be more - at least `10` CPUs and `18Gi` memory. Additionally, please configure at least `120Gi` of disk size for the Docker daemon. - > Tip: With `docker system df` and `docker system prune -a` you can cleanup unused data. + > Tip: You can clean up unused data with `docker system df` and `docker system prune -a`. - Make sure the `kind` docker network is using the CIDR `172.18.0.0/16`. - If the network does not exist, it can be created with `docker network create kind --subnet 172.18.0.0/16` - If the network already exists, the CIDR can be checked with `docker network inspect kind | jq '.[].IPAM.Config[].Subnet'`. If it is not `172.18.0.0/16`, delete the network with `docker network rm kind` and create it with the command above. -## Setting up the KinD cluster (garden and seed) +## Setting Up the KinD Cluster (Garden and Seed) ```bash make kind-up @@ -30,12 +32,12 @@ make kind-up This command sets up a new KinD cluster named `gardener-local` and stores the kubeconfig in the `./example/gardener-local/kind/local/kubeconfig` file. -> It might be helpful to copy this file to `$HOME/.kube/config` since you will need to target this KinD cluster multiple times. +> It might be helpful to copy this file to `$HOME/.kube/config`, since you will need to target this KinD cluster multiple times. Alternatively, make sure to set your `KUBECONFIG` environment variable to `./example/gardener-local/kind/local/kubeconfig` for all future steps via `export KUBECONFIG=example/gardener-local/kind/local/kubeconfig`. 
-All following steps assume that you are using this kubeconfig.
+All of the following steps assume that you are using this kubeconfig.

-Additionally, this command also deploys a local container registry to the cluster as well as a few registry mirrors, that are set up as a pull-through cache for all upstream registries Gardener uses by default.
+Additionally, this command deploys a local container registry to the cluster, as well as a few registry mirrors that are set up as a pull-through cache for all upstream registries Gardener uses by default.
This is done to speed up image pulls across local clusters.
The local registry can be accessed as `localhost:5001` for pushing and pulling.
The storage directories of the registries are mounted to the host machine under `dev/local-registry`.
@@ -44,18 +46,18 @@ With this, mirrored images don't have to be pulled again after recreating the cl
The command also deploys a default [calico](https://github.com/projectcalico/calico) installation as the cluster's CNI implementation with `NetworkPolicy` support (the default `kindnet` CNI doesn't provide `NetworkPolicy` support).
Furthermore, it deploys the [metrics-server](https://github.com/kubernetes-sigs/metrics-server) in order to support HPA and VPA on the seed cluster.

-## Setting up Gardener
+## Setting Up Gardener

```bash
make gardener-up
```

-This will first build the images based (which might take a bit if you do it for the first time).
+This will first build the base image (which might take a bit if you do it for the first time).
Afterwards, the Gardener resources will be deployed into the cluster.
-## Creating a `Shoot` cluster +## Creating a `Shoot` Cluster -You can wait for the `Seed` to be ready by running +You can wait for the `Seed` to be ready by running: ```bash kubectl wait --for=condition=gardenletready --for=condition=extensionsready --for=condition=bootstrapped seed local --timeout=5m @@ -68,13 +70,13 @@ NAME STATUS PROVIDER REGION AGE VERSION K8S VERSION local Ready local local 4m42s vX.Y.Z-dev v1.21.1 ``` -In order to create a first shoot cluster, just run +In order to create a first shoot cluster, just run: ```bash kubectl apply -f example/provider-local/shoot.yaml ``` -You can wait for the `Shoot` to be ready by running +You can wait for the `Shoot` to be ready by running: ```bash kubectl wait --for=condition=apiserveravailable --for=condition=controlplanehealthy --for=condition=everynodeready --for=condition=systemcomponentshealthy shoot local -n garden-local --timeout=10m @@ -87,16 +89,16 @@ NAME CLOUDPROFILE PROVIDER REGION K8S VERSION HIBERNATION LAST OPER local local local local 1.21.0 Awake Create Processing (43%) healthy 94s ``` -(Optional): You could also execute a simple e2e test (creating and deleting a shoot) by running +(Optional): You could also execute a simple e2e test (creating and deleting a shoot) by running: ```shell make test-e2e-local-simple KUBECONFIG="$PWD/example/gardener-local/kind/local/kubeconfig" ``` -### Accessing the `Shoot` cluster +### Accessing the `Shoot` Cluster -⚠️ Please note that in this setup shoot clusters are not accessible by default when you download the kubeconfig and try to communicate with them. -The reason is that your host most probably cannot resolve the DNS names of the clusters since `provider-local` extension runs inside the KinD cluster (see [this](../extensions/provider-local.md#dnsrecord) for more details). +⚠️ Please note that in this setup, shoot clusters are not accessible by default when you download the kubeconfig and try to communicate with them. 
+The reason is that your host most probably cannot resolve the DNS names of the clusters since `provider-local` extension runs inside the KinD cluster (for more details, see [DNSRecord](../extensions/provider-local.md#dnsrecord)). Hence, if you want to access the shoot cluster, you have to run the following command which will extend your `/etc/hosts` file with the required information to make the DNS names resolvable: ```bash @@ -130,14 +132,14 @@ cat < /tmp/kubeconfig-shoot-local.yaml kubectl --kubeconfig=/tmp/kubeconfig-shoot-local.yaml get nodes ``` -## (Optional): Setting up a second seed cluster +## (Optional): Setting Up a Second Seed Cluster There are cases where you would want to create a second cluster seed in your local setup. For example, if you want to test the [control plane migration](../usage/control_plane_migration.md) feature. The following steps describe how to do that. @@ -169,31 +171,31 @@ NAME STATUS PROVIDER REGION AGE VERSION K8S VERSION local2 Ready local local 4m42s vX.Y.Z-dev v1.21.1 ``` -If you want to perform control plane migration you can follow the steps outlined [here](../usage/control_plane_migration.md) to migrate the shoot cluster to the second seed you just created. +If you want to perform control plane migration, you can follow the steps outlined in [Control Plane Migration](../usage/control_plane_migration.md) to migrate the shoot cluster to the second seed you just created. 
-## Deleting the `Shoot` cluster +## Deleting the `Shoot` Cluster ```shell ./hack/usage/delete shoot local garden-local ``` -## (Optional): Tear down the second seed cluster +## (Optional): Tear Down the Second Seed Cluster ``` shell make kind2-down ``` -## Tear down the Gardener environment +## Tear Down the Gardener Environment ```shell make kind-down ``` -## Remote local setup +## Remote Local Setup Just like Prow is executing the KinD based integration tests in a K8s pod, it is -possible to interactively run this KinD based Gardener development environment -aka "local setup" in a "remote" K8s pod. +possible to interactively run this KinD based Gardener development environment, +aka "local setup", in a "remote" K8s pod. ```shell k apply -f docs/deployment/content/remote-local-setup.yaml @@ -221,6 +223,6 @@ The port forward in the remote-local-setup pod to the respective component: k port-forward -n shoot--local--local deployment/grafana-operators 3000 ``` -## Further reading +## Related Links -This setup makes use of the local provider extension. You can read more about it in [this document](../extensions/provider-local.md). +- [Local Provider Extension](../extensions/provider-local.md) diff --git a/docs/deployment/getting_started_locally_with_extensions.md b/docs/deployment/getting_started_locally_with_extensions.md index 1833a8e03bd..8c4e768d33f 100644 --- a/docs/deployment/getting_started_locally_with_extensions.md +++ b/docs/deployment/getting_started_locally_with_extensions.md @@ -1,11 +1,13 @@ -# Deploying Gardener locally and enabling provider-extensions +# Deploying Gardener Locally and Enabling Provider-Extensions This document will walk you through deploying Gardener on your local machine and bootstrapping your own seed clusters on an existing Kubernetes cluster. -It is supposed to run your local Gardener developments on a real infrastructure. 
For running Gardener only entirely local, please check the [getting started locally](getting_started_locally.md) docs.
+It is intended for running your local Gardener developments on a real infrastructure. For running Gardener entirely locally, please check the [getting started locally](getting_started_locally.md) documentation.

If you encounter difficulties, please open an issue so that we can make this process easier.

+## Overview
+
Gardener runs in any Kubernetes cluster.
-In this guide, we will start a [KinD](https://kind.sigs.k8s.io/) cluster which is used as garden cluster. Any Kubernetes cluster could be used as seed clusters in order to support provider extensions (please refer to the [architecture overview](../concepts/architecture.md)). This guide is tested for using Kubernetes Clusters provided by Gardener, AWS, Azure and GCP as seed so far.
+In this guide, we will start a [KinD](https://kind.sigs.k8s.io/) cluster which is used as the garden cluster. Any Kubernetes cluster could be used as a seed cluster in order to support provider extensions (please refer to the [architecture overview](../concepts/architecture.md)). This guide is tested for using Kubernetes clusters provided by Gardener, AWS, Azure, and GCP as seed so far.

Based on [Skaffold](https://skaffold.dev/), the container images for all required components will be built and deployed into the clusters (via their [Helm charts](https://helm.sh/)).

@@ -13,18 +15,18 @@ Based on [Skaffold](https://skaffold.dev/), the container images for all require

## Prerequisites

-- Make sure that you have prepared your setup and checked out Gardener sources as described by the [Local Setup guide](../development/local_setup.md).
-- Make sure your Docker daemon is up-to-date, up and running and has enough resources (at least `8` CPUs and `8Gi` memory; see [here](https://docs.docker.com/desktop/settings/mac/) how to configure the resources for Docker for Mac).
+- Make sure that you have prepared your setup and checked out Gardener sources as described in the [Local Setup guide](../development/local_setup.md). +- Make sure your Docker daemon is up-to-date, up and running and has enough resources (at least `8` CPUs and `8Gi` memory; see the [Docker documentation](https://docs.docker.com/desktop/settings/mac/) for how to configure the resources for Docker for Mac). > Additionally, please configure at least `120Gi` of disk size for the Docker daemon. - > Tip: With `docker system df` and `docker system prune -a` you can clean up unused data. + > Tip: You can clean up unused data with `docker system df` and `docker system prune -a`. - Make sure that you have access to a Kubernetes cluster you can use as a seed cluster in this setup. - The seed cluster requires at least 16 CPUs in total to run one shoot cluster - You could use any Kubernetes cluster for your seed cluster. However, using a Gardener shoot cluster for your seed simplifies some configuration steps. - - When bootstrapping `gardenlet` to the cluster your new seed will have the same provider type as the shoot cluster you use - an AWS shoot will become an AWS seed, an GCP shoot will become an GCP seed etc. (only relevant when using a Gardener shoot as seed). + - When bootstrapping `gardenlet` to the cluster, your new seed will have the same provider type as the shoot cluster you use - an AWS shoot will become an AWS seed, a GCP shoot will become a GCP seed, etc. (only relevant when using a Gardener shoot as seed). -## Provide Infrastructure Credentials And Configuration +## Provide Infrastructure Credentials and Configuration -As this setup is running on a real infrastructure, you have to provide credentials for DNS, the infrastructure and the kubeconfig for Gardener cluster you want to use as seed. 
+As this setup is running on a real infrastructure, you have to provide credentials for DNS, the infrastructure, and the kubeconfig for the Gardener cluster you want to use as seed.

> There are `.gitignore` entries for all files and directories which include credentials. Nevertheless, please double-check and make sure that credentials are not committed.

@@ -48,15 +50,15 @@ Additionally, please maintain the configuration of your seed in `./example/provi

Using a Gardener cluster as seed simplifies the process, because some configuration options can be taken from `shoot-info` and creating DNS entries and TLS certificates is automated.

-However, you can use different Kubernetes clusters for your seed too and configure these things manually. Please configure the options of `./example/provider-extensions/gardenlet/values.yaml` upfront. For configuring DNS and TLS certificates, `make gardener-extensions-up` , which is explained later, will pause and tell you what to do.
+However, you can use different Kubernetes clusters for your seed too and configure these things manually. Please configure the options of `./example/provider-extensions/gardenlet/values.yaml` upfront. For configuring DNS and TLS certificates, `make gardener-extensions-up`, which is explained later, will pause and tell you what to do.

### External Controllers

-You might plan to deploy and register external controllers for networking, operating system, providers, etc.. Please put `ControllerDeployment`s and `ControllerRegistration`s into `./example/provider-extensions/garden/controllerregistrations` directory. The whole content of this folder will be applied to your KinD cluster.
+You might plan to deploy and register external controllers for networking, operating system, providers, etc. Please put `ControllerDeployment`s and `ControllerRegistration`s into the `./example/provider-extensions/garden/controllerregistrations` directory. The whole content of this folder will be applied to your KinD cluster.
### `CloudProfile`s -There are no demo `CloudProfiles` yet. Thus, please copy `CloudProfiles` from another landscape to `./example/provider-extensions/garden/cloudprofiles` directory or create your own `CloudProfiles` based on the [gardener examples](../../example/30-cloudprofile.yaml). Please check the GitHub repository of your desired provider-extension. Most of them include example `CloudProfile`s. All files you place in this folder will be applied to your KinD cluster. +There are no demo `CloudProfiles` yet. Thus, please copy `CloudProfiles` from another landscape to the `./example/provider-extensions/garden/cloudprofiles` directory or create your own `CloudProfiles` based on the [gardener examples](../../example/30-cloudprofile.yaml). Please check the GitHub repository of your desired provider-extension. Most of them include example `CloudProfile`s. All files you place in this folder will be applied to your KinD cluster. -## Setting Up The KinD Cluster +## Setting Up the KinD Cluster ```bash make kind-extensions-up @@ -64,10 +66,10 @@ make kind-extensions-up This command sets up a new KinD cluster named `gardener-local` and stores the kubeconfig in the `./example/gardener-local/kind/extensions/kubeconfig` file. -> It might be helpful to copy this file to `$HOME/.kube/config` since you will need to target this KinD cluster multiple times. +> It might be helpful to copy this file to `$HOME/.kube/config`, since you will need to target this KinD cluster multiple times. Alternatively, make sure to set your `KUBECONFIG` environment variable to `./example/gardener-local/kind/extensions/kubeconfig` for all future steps via `export KUBECONFIG=$PWD/example/gardener-local/kind/extensions/kubeconfig`. -All following steps assume that you are using this kubeconfig. +All of the following steps assume that you are using this kubeconfig. 
Additionally, this command deploys a local container registry to the cluster as well as a few registry mirrors that are set up as a pull-through cache for all upstream registries Gardener uses by default. This is done to speed up image pulls across local clusters.

@@ -86,9 +88,9 @@ make gardener-extensions-up

This will first prepare the basic configuration of your KinD and Gardener clusters.
Afterwards, the images for the Garden cluster are built and deployed into the KinD cluster.
-Finally, the images for the Seed cluster are built, pushed to a container registry on the Seed and the `gardenlet` is started.
+Finally, the images for the Seed cluster are built, pushed to a container registry on the Seed, and the `gardenlet` is started.

-## Pause And Unpause The KinD Cluster
+## Pause and Unpause the KinD Cluster

The KinD cluster can be paused by stopping and keeping its docker container. This can be done by running:

@@ -102,7 +104,7 @@ This provides the option to switch off your local KinD cluster fast without leav

## Creating a `Shoot` Cluster

-You can wait for the `Seed` to be ready by running
+You can wait for the `Seed` to be ready by running:

```bash
kubectl wait --for=condition=gardenletready seed provider-extensions --timeout=5m
@@ -130,21 +132,21 @@ azure az azure westeurope 1.24.2 Awake
gcp gcp gcp europe-west1 1.24.3 Awake Create Processing (43%) healthy 94s

-### Accessing The `Shoot` Cluster
+### Accessing the `Shoot` Cluster

-Your shoot clusters will have a public DNS entries for their API servers, so that they are could be reached via the Internet via `kubectl` after you created their `kubeconfig`.
+Your shoot clusters will have public DNS entries for their API servers, so that they can be reached via the Internet via `kubectl` after you have created their `kubeconfig`.

-We encourage you to use the [adminkubeconfig subresource](../proposals/16-adminkubeconfig-subresource.md) for accessing your shoot cluster.
You find an example how to use it in our [docs](../usage/shoot_access.md#shootsadminkubeconfig-subresource).
+We encourage you to use the [adminkubeconfig subresource](../proposals/16-adminkubeconfig-subresource.md) for accessing your shoot cluster. You can find an example of how to use it in [Accessing Shoot Clusters](../usage/shoot_access.md#shootsadminkubeconfig-subresource).

-## Deleting The `Shoot` Clusters
+## Deleting the `Shoot` Clusters

-Before tearing down your environment you have to delete your shoot clusters. This is highly recommended because otherwise you would leave orphaned items on your infrastructure accounts.
+Before tearing down your environment, you have to delete your shoot clusters. This is highly recommended because otherwise you would leave orphaned items on your infrastructure accounts.

```bash
./hack/usage/delete shoot garden-local
```

-## Tear Down The Gardener Environment
+## Tear Down the Gardener Environment

Before you delete your local KinD cluster, you should shut down your `Shoots` and `Seed` in a clean way to avoid orphaned infrastructure elements in your projects.

@@ -154,9 +156,9 @@ Please ensure that your KinD and Seed clusters are online (not paused or hiberna

make gardener-extensions-down
```

-This will delete all `Shoots` first (this could take a couple of minutes), then uninstall `gardenlet` from the Seed and the gardener components from the KinD. Finally, the additional components like container registry etc. are deleted from both clusters.
+This will delete all `Shoots` first (this could take a couple of minutes), then uninstall `gardenlet` from the Seed and the gardener components from the KinD cluster. Finally, the additional components, like the container registry, etc., are deleted from both clusters.
-When this is done, you can securely delete your local KinD cluster by:
+When this is done, you can safely delete your local KinD cluster by running:

```bash
make kind-extensions-clean
diff --git a/docs/deployment/image_vector.md b/docs/deployment/image_vector.md
index 94f738c22af..5531dc1cfad 100644
--- a/docs/deployment/image_vector.md
+++ b/docs/deployment/image_vector.md
@@ -25,7 +25,7 @@ images:
...
```

-That means that the Gardenlet will use the `pause-container` in with tag `3.4` for all seed/shoot clusters with Kubernetes version `1.20.x`, and tag `3.5` for all clusters with Kubernetes `>= 1.21`.
+That means that the Gardenlet will use the `pause-container` with tag `3.4` for all seed/shoot clusters with Kubernetes version `1.20.x`, and tag `3.5` for all clusters with Kubernetes `>= 1.21`.

## Image Vector Architecture

@@ -53,21 +53,21 @@ images:
...
```

-Architectures is an optional field of image. It is a list of strings specifying CPU architecture of machines on which this image can be used. The valid options for architectures field are as follows:
-- `amd64` : This specifies image can run only on machines having CPU architecture `amd64`.
-- `arm64` : This specifies image can run only on machines having CPU architecture `arm64`.
+`architectures` is an optional field of an image. It is a list of strings specifying the CPU architectures of machines on which this image can be used. The valid options for the architectures field are as follows:
+- `amd64`: This specifies that the image can run only on machines having CPU architecture `amd64`.
+- `arm64`: This specifies that the image can run only on machines having CPU architecture `arm64`.

-If image doesn't specify any architectures then by default it is considered to support both `amd64` and `arm64` architectures.
+If an image doesn't specify any architectures, then by default it is considered to support both `amd64` and `arm64` architectures.
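To make the `architectures` field described above concrete, a minimal image vector entry restricting an image to `amd64`-only machines could look like the following sketch (the `repository` and `tag` values are illustrative placeholders, not taken from an actual Gardener release):

```yaml
images:
- name: pause-container
  sourceRepository: github.com/kubernetes/kubernetes/blob/master/build/pause/Dockerfile
  repository: registry.k8s.io/pause
  tag: "3.5"
  # Omitting this list would mean the image is assumed to
  # support both amd64 and arm64.
  architectures:
  - amd64
```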
-## Overwrite image vector
+## Overwrite Image Vector

-In some environment it is not possible to use these "pre-defined" images that come with a Gardener release.
-A prominent example for that is Alicloud in China which does not allow access to Google's GCR.
-In these cases you might want to overwrite certain images, e.g., point the `pause-container` to a different registry.
+In some environments it is not possible to use these "pre-defined" images that come with a Gardener release.
+A prominent example for that is Alicloud in China, which does not allow access to Google's GCR.
+In these cases, you might want to overwrite certain images, e.g., point the `pause-container` to a different registry.

-:warning: If you specify an image that does not fit to the resource manifest then the seed/shoot reconciliation might fail.
+:warning: If you specify an image that does not fit the resource manifest, then the seed/shoot reconciliation might fail.

-In order to overwrite the images you must provide a similar file to Gardenlet:
+In order to overwrite the images, you must provide a similar file to gardenlet:

```yaml
images:
@@ -84,8 +84,8 @@ images:
...
```

-During deployment of the gardenlet create a `ConfigMap` containing the above content and mount it as a volume into the gardenlet pod.
-Next, specify the environment variable `IMAGEVECTOR_OVERWRITE` whose value must be the path to the file you just mounted:
+During deployment of the gardenlet, create a `ConfigMap` containing the above content and mount it as a volume into the gardenlet pod.
+Next, specify the environment variable `IMAGEVECTOR_OVERWRITE`, whose value must be the path to the file you just mounted:

```yaml
apiVersion: v1
@@ -123,7 +123,7 @@ spec:
...
```

-## Image vectors for dependent components
+## Image Vectors for Dependent Components

The gardenlet is deploying a lot of different components that might deploy other images themselves.
These components might use an image vector as well.
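For the dependent components mentioned in the last paragraph, gardenlet supports a second environment variable, `IMAGEVECTOR_OVERWRITE_COMPONENTS`, which points to a file wrapping per-component image vector overwrites. The exact schema is defined by gardenlet, so treat the following as a rough sketch only (the component name and image values are hypothetical):

```yaml
# Hypothetical example of a components overwrite file; the component
# name and image coordinates are placeholders, not real release values.
components:
- name: etcd-druid
  imageVectorOverwrite: |
    images:
    - name: etcd
      repository: my-registry.example.com/etcd
      tag: "v3.5.0"
```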
diff --git a/docs/deployment/secret_binding_provider_controller.md b/docs/deployment/secret_binding_provider_controller.md
index bde7f9c524b..2514af15bca 100644
--- a/docs/deployment/secret_binding_provider_controller.md
+++ b/docs/deployment/secret_binding_provider_controller.md
@@ -4,7 +4,7 @@ This page describes the process on how to enable the SecretBinding provider cont

## Overview

-With Gardener v1.38.0 the SecretBinding resource does now contain a new optional field `.provider.type` (details about the motivation can be found in https://github.com/gardener/gardener/issues/4888). To make the process of setting the new field automated and afterwards to enforce validation on the new field in backwards compatible manner, Gardener features the SecretBinding provider controller and a feature gate - `SecretBindingProviderValidation`.
+With Gardener v1.38.0, the SecretBinding resource now contains a new optional field `.provider.type` (details about the motivation can be found in https://github.com/gardener/gardener/issues/4888). To automate the process of setting the new field and afterwards enforce validation on it in a backwards-compatible manner, Gardener features the SecretBinding provider controller and a feature gate - `SecretBindingProviderValidation`.

## Process

@@ -12,14 +12,14 @@ A Gardener landscape operator can follow the following steps:

1. Enable the SecretBinding provider controller of Gardener Controller Manager.

- The SecretBinding provider controller is responsible to populate the `.provider.type` field of a SecretBinding based on its current usage by Shoot resources. For example if a Shoot `crazy-botany` with `.provider.type=aws` is using a SecretBinding `my-secret-binding`, then the SecretBinding provider controller will take care to set the `.provider.type` field of the SecretBinding to the same provider type (`aws`).
- To enable the SecretBinding provider controller, in the ControllerManagerConfiguration set the `controller.secretBindingProvider.concurentSyncs` field (e.g set it to `5`).
- Although that it is not recommended, the API allows Shoots from different provider types to reference the same SecretBinding (assuming that backing Secret contains data for both of the provider types). To preserve the backwards compatibility for such SecretBindings, the provider controller will maintain the multiple provider types in the field (it will join them with separator `,` - for example `aws,gcp`).
+ The SecretBinding provider controller is responsible for populating the `.provider.type` field of a SecretBinding based on its current usage by Shoot resources. For example, if a Shoot `crazy-botany` with `.provider.type=aws` is using a SecretBinding `my-secret-binding`, then the SecretBinding provider controller will take care to set the `.provider.type` field of the SecretBinding to the same provider type (`aws`).
+ To enable the SecretBinding provider controller, set the `controller.secretBindingProvider.concurrentSyncs` field in the ControllerManagerConfiguration (e.g., set it to `5`).
+ Although it is not recommended, the API allows Shoots from different provider types to reference the same SecretBinding (assuming that the backing Secret contains data for both of the provider types). To preserve the backwards compatibility for such SecretBindings, the provider controller will maintain the multiple provider types in the field (it will join them with the separator `,` - for example `aws,gcp`).

-2. Disable the SecretBinding provider controller and enable `SecretBindingProviderValidation` feature gate of Gardener API server.
+2. Disable the SecretBinding provider controller and enable the `SecretBindingProviderValidation` feature gate of the Gardener API server.
- The `SecretBindingProviderValidation` feature gate of Gardener API server enables set of validations for the SecretBinding provider field. It forbids creating a Shoot that has a different provider type from the referenced SecretBinding's one. It also enforces immutability on the field.
- After making sure that SecretBinding provider controller is enabled and it populated the `.provider.type` field of a majority of the SecretBindings on a Gardener landscape (the SecretBindings that are unused will have their provider type unset), a Gardener landscape operator has to disable the SecretBinding provider controller and to enable the `SecretBindingProviderValidation` feature gate of Gardener API server. To disable the SecretBinding provider controller, in the ControllerManagerConfiguration set the `controller.secretBindingProvider.concurentSyncs` field to `0`.
+ The `SecretBindingProviderValidation` feature gate of the Gardener API server enables a set of validations for the SecretBinding provider field. It forbids creating a Shoot that has a different provider type from the referenced SecretBinding's one. It also enforces immutability on the field.
+ After making sure that the SecretBinding provider controller is enabled and has populated the `.provider.type` field of a majority of the SecretBindings on a Gardener landscape (the SecretBindings that are unused will have their provider type unset), a Gardener landscape operator has to disable the SecretBinding provider controller and enable the `SecretBindingProviderValidation` feature gate of the Gardener API server. To disable the SecretBinding provider controller, set the `controller.secretBindingProvider.concurrentSyncs` field in the ControllerManagerConfiguration to `0`.
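Sketched as a ControllerManagerConfiguration fragment, the toggle described in the steps above could look like this (assuming the conventional `controllers` section layout of the gardener-controller-manager configuration; all unrelated fields are omitted):

```yaml
apiVersion: controllermanager.config.gardener.cloud/v1alpha1
kind: ControllerManagerConfiguration
controllers:
  secretBindingProvider:
    # Set to a value > 0 (e.g. 5) to enable the controller, and back
    # to 0 to disable it again before enabling the
    # SecretBindingProviderValidation feature gate.
    concurrentSyncs: 5
```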
## Implementation History

diff --git a/docs/deployment/setup_gardener.md b/docs/deployment/setup_gardener.md
index 8a7dc742a87..8b705d6008f 100644
--- a/docs/deployment/setup_gardener.md
+++ b/docs/deployment/setup_gardener.md
@@ -1,21 +1,21 @@
-# Deploying the Gardener into a Kubernetes cluster
+# Deploying Gardener into a Kubernetes Cluster

-Similar to Kubernetes, Gardener consists out of control plane components (Gardener API server, Gardener controller manager, Gardener scheduler), and an agent component (Gardenlet).
-The control plane is deployed in the so-called garden cluster while the agent is installed into every seed cluster.
-Please note that it is possible to use the garden cluster as seed cluster by simply deploying the Gardenlet into it.
+Similarly to Kubernetes, Gardener consists of control plane components (Gardener API server, Gardener controller manager, Gardener scheduler) and an agent component (gardenlet).
+The control plane is deployed in the so-called garden cluster, while the agent is installed into every seed cluster.
+Please note that it is possible to use the garden cluster as a seed cluster by simply deploying the gardenlet into it.

We are providing [Helm charts](../../charts/gardener) in order to manage the various resources of the components.
Please always make sure that you use the Helm chart version that matches the Gardener version you want to deploy.

-## Deploying the Gardener control plane (API server, admission controller, controller manager, scheduler)
+## Deploying the Gardener Control Plane (API Server, Admission Controller, Controller Manager, Scheduler)

The [configuration values](../../charts/gardener/controlplane/values.yaml) depict the various options to configure the different components.
-Please consult [this document](../usage/configuration.md) for component specific configurations and [this document](./authentication_gardener_control_plane.md) for authentication related specifics.
+Please consult [Gardener Configuration and Usage](../usage/configuration.md) for component-specific configurations and [Authentication of Gardener Control Plane Components Against the Garden Cluster](./authentication_gardener_control_plane.md) for authentication-related specifics.

-Also note that all resources and deployments need to be created in the `garden` namespace (not overrideable).
+Also, note that all resources and deployments need to be created in the `garden` namespace (not overridable).
If you enable the Gardener admission controller as part of your setup, please make sure the `garden` namespace is labelled with `app: gardener`.
Otherwise, the backing service account for the admission controller Pod might not be created successfully.
-No action is necessary, if you deploy the `garden` namespace with the Gardener control plane Helm chart.
+No action is necessary if you deploy the `garden` namespace with the Gardener control plane Helm chart.

After preparing your values in a separate `controlplane-values.yaml` file ([values.yaml](../../charts/gardener/controlplane/values.yaml) can be used as a starting point), you can run the following command against your garden cluster:

@@ -27,13 +27,13 @@ helm install charts/gardener/controlplane \
 --wait
```

-## Deploying Gardener extensions
+## Deploying Gardener Extensions

Gardener is an extensible system that does not contain the logic for provider-specific things like DNS management, cloud infrastructures, network plugins, operating system configs, and many more.
You have to install extension controllers for these parts.
Please consult [the documentation regarding extensions](../extensions/overview.md) to get more information.

-## Deploying the Gardener Agent (Gardenlet)
+## Deploying the Gardener Agent (gardenlet)

-Please refer to [this document](./deploy_gardenlet.md) on how to deploy a Gardenlet.
\ No newline at end of file +Please refer to [Deploying Gardenlets](./deploy_gardenlet.md) on how to deploy a gardenlet. \ No newline at end of file diff --git a/docs/deployment/version_skew_policy.md b/docs/deployment/version_skew_policy.md index 8840a957979..1ed146d3532 100644 --- a/docs/deployment/version_skew_policy.md +++ b/docs/deployment/version_skew_policy.md @@ -6,12 +6,12 @@ This document describes the maximum version skew supported between various Garde Gardener versions are expressed as `x.y.z`, where `x` is the major version, `y` is the minor version, and `z` is the patch version, following Semantic Versioning terminology. -The Gardener project maintains release branches for the most recent three minor releases. +The Gardener project maintains release branches for the three most recent minor releases. Applicable fixes, including security fixes, may be backported to those three release branches, depending on severity and feasibility. Patch releases are cut from those branches at a regular cadence, plus additional urgent releases when required. -For more information, see [this document](../development/process.md#releases). +For more information, see the [Releases document](../development/process.md#releases). 
### Supported Version Skew @@ -45,26 +45,26 @@ This section describes the order in which components must be upgraded to transit #### gardener-apiserver -Pre-requisites: +Prerequisites: -- In a single-instance setup, the existing `gardener-apiserver` instance is **1.37** -- In a multi-instance setup, all `gardener-apiserver` instances are at **1.37** or **1.38** (this ensures maximum skew of 1 minor version between the oldest and newest `gardener-apiserver` instance) -- The `gardener-controller-manager`, `gardener-scheduler`, `gardener-admission-controller`, and `gardenlet` instances that communicate with this `gardener-apiserver` are at version **1.37** (this ensures they are not newer than the existing API server version and are within 1 minor version of the new API server version) +- In a single-instance setup, the existing `gardener-apiserver` instance is **1.37**. +- In a multi-instance setup, all `gardener-apiserver` instances are at **1.37** or **1.38** (this ensures maximum skew of 1 minor version between the oldest and newest `gardener-apiserver` instance). +- The `gardener-controller-manager`, `gardener-scheduler`, `gardener-admission-controller`, and `gardenlet` instances that communicate with this `gardener-apiserver` are at version **1.37** (this ensures they are not newer than the existing API server version and are within 1 minor version of the new API server version). -Action: +Actions: -- Upgrade `gardener-apiserver` to **1.38** +- Upgrade `gardener-apiserver` to **1.38**. 
#### gardener-controller-manager, gardener-scheduler, gardener-admission-controller, gardenlet -Pre-requisites: +Prerequisites: - The `gardener-apiserver` instances these components communicate with are at **1.38** (in multi-instance setups in which these components can communicate with any `gardener-apiserver` instance in the cluster, all `gardener-apiserver` instances must be upgraded before upgrading these components) -Action: +Actions: - Upgrade `gardener-controller-manager`, `gardener-scheduler`, `gardener-admission-controller`, and `gardenlet` to **1.38** ## Supported Kubernetes Versions -Please refer to [this document](../usage/supported_k8s_versions.md). +Please refer to [Supported Kubernetes Versions](../usage/supported_k8s_versions.md). \ No newline at end of file