Fixed broken links - 2024.12 #563

Merged: 1 commit, Dec 18, 2024
@@ -355,7 +355,7 @@ spec:

#### On `spec.controlPlane.highAvailability.failureTolerance.type`

-If set, determines the degree of failure tolerance for your control plane. `zone` is preferred, but only available if your control plane resides in a region with 3+ zones. See [above](#control-plane) and the [docs](https://github.com/gardener/gardener/blob/master/docs/usage/shoot_high_availability.md).
+If set, determines the degree of failure tolerance for your control plane. `zone` is preferred, but only available if your control plane resides in a region with 3+ zones. See [above](#control-plane) and the [docs](https://github.com/gardener/gardener/blob/master/docs/usage/high-availability/shoot_high_availability.md).
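
For illustration, a minimal sketch of how this field might look in a Shoot manifest (the field path comes from the heading above; `zone` assumes the control plane runs in a region with at least three zones):

```yaml
spec:
  controlPlane:
    highAvailability:
      failureTolerance:
        type: zone   # `node` is the alternative; `zone` needs a region with 3+ zones
```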

#### On `spec.kubernetes.kubeAPIServer.defaultUnreachableTolerationSeconds` and `defaultNotReadyTolerationSeconds`

@@ -394,7 +394,7 @@ This configures horizontal pod autoscaling in Gardener-managed clusters. See [ab

#### On `spec.kubernetes.verticalPodAutoscaler...`

-This configures vertical pod autoscaling in Gardener-managed clusters. See [above](#resources-vertical-scaling) and the [docs](https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/FAQ.md) for the detailed fields.
+This configures vertical pod autoscaling in Gardener-managed clusters. See [above](#resources-vertical-scaling) and the [docs](https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/docs/faq.md) for the detailed fields.
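
A hedged sketch of enabling VPA in the Shoot spec (field path assumed from the heading above; the additional tuning fields are described in the linked FAQ):

```yaml
spec:
  kubernetes:
    verticalPodAutoscaler:
      enabled: true   # further fields tune eviction and recommendation behavior
```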

#### On `spec.kubernetes.clusterAutoscaler...`

6 changes: 3 additions & 3 deletions website/documentation/getting-started/ca-components.md
@@ -113,7 +113,7 @@ But most importantly, it is pre-defined and not configurable by the end user.

Therefore, the "external" domain name exists. It is either a user owned domain or can be pre-defined for a Gardener landscape. It is used by any end user accessing the cluster's API server.

-For more information, see [Contract: DNSRecord Resources](https://github.com/gardener/gardener/blob/master/docs/extensions/dnsrecord.md).
+For more information, see [Contract: DNSRecord Resources](https://github.com/gardener/gardener/blob/master/docs/extensions/resources/dnsrecord.md).

## Features and Observability

@@ -184,9 +184,9 @@ When calico is failing on a node, no new pods can start there as they don't get

![kube-system-namespace-3](./images/kube-system-namespace-3.png)

-For a normal service in Kubernetes, a cluster-internal DNS record that resolves to the service's ClusterIP address is being created. In Gardener (similar to most other Kubernetes offerings) CoreDNS takes care of this aspect. To reduce the load when it comes to upstream DNS queries, Gardener deploys a DNS cache to each node by default. It will also forward queries outside the cluster's search domain directly to the upstream DNS server. For more information, see [NodeLocalDNS Configuration](https://github.com/gardener/gardener/blob/master/docs/usage/node-local-dns.md) and [DNS autoscaling](https://github.com/gardener/gardener/blob/master/docs/usage/dns-autoscaling.md).
+For a normal service in Kubernetes, a cluster-internal DNS record that resolves to the service's ClusterIP address is being created. In Gardener (similar to most other Kubernetes offerings) CoreDNS takes care of this aspect. To reduce the load when it comes to upstream DNS queries, Gardener deploys a DNS cache to each node by default. It will also forward queries outside the cluster's search domain directly to the upstream DNS server. For more information, see [NodeLocalDNS Configuration](https://github.com/gardener/gardener/blob/master/docs/usage/networking/node-local-dns.md) and [DNS autoscaling](https://github.com/gardener/gardener/blob/master/docs/usage/autoscaling/dns-autoscaling.md).

-In addition to this optimization, Gardener allows [custom DNS configuration to be added to CoreDNS](https://github.com/gardener/gardener/blob/master/docs/usage/custom-dns-config.md) via a dedicated ConfigMap.
+In addition to this optimization, Gardener allows [custom DNS configuration to be added to CoreDNS](https://github.com/gardener/gardener/blob/master/docs/usage/networking/custom-dns-config.md) via a dedicated ConfigMap.

In case this customization is related to non-Kubernetes entities, you may configure the shoot's NodeLocalDNS to forward to CoreDNS instead of upstream (`disableForwardToUpstreamDNS: true`).
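
As a rough sketch (field names taken from the Gardener Shoot API; verify against your Gardener version), enabling NodeLocalDNS and routing non-cluster queries through CoreDNS instead of the upstream DNS server could look like this:

```yaml
spec:
  systemComponents:
    nodeLocalDNS:
      enabled: true
      # Send queries outside the cluster's search domain to CoreDNS,
      # where the custom DNS configuration applies, instead of upstream.
      disableForwardToUpstreamDNS: true
```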

@@ -18,4 +18,4 @@ As part of the control plane, the following components are deployed in the seed
- kube-controller-manager
- gardener-resource-manager
- Logging and monitoring components
-- Extension components (to find out if they support workerless shoots, see the [Extensions](https://github.com/gardener/gardener/blob/master/docs/extensions/extension.md#what-is-required-to-register-and-support-an-extension-type) documentation)
+- Extension components (to find out if they support workerless shoots, see the [Extensions](https://github.com/gardener/gardener/blob/master/docs/extensions/resources/extension.md#what-is-required-to-register-and-support-an-extension-type) documentation)
2 changes: 1 addition & 1 deletion website/documentation/getting-started/shoots.md
@@ -94,7 +94,7 @@ An alternative is to use an identity provider and issue OIDC tokens.

With the basic configuration options having been introduced, it is time to discuss more possibilities. Gardener offers a variety of options to tweak the control plane's behavior - like defining an event TTL (default 1h), adding an OIDC configuration or activating some feature gates. You could alter the scheduling profile and define an audit logging policy. In addition, the control plane can be configured to run in HA mode (applied on a node or zone level), but keep in mind that once you enable HA, you cannot go back.
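
For illustration, a hedged sketch of two of these options in the Shoot spec (field paths assumed from the Gardener Shoot API; the values are examples only):

```yaml
spec:
  kubernetes:
    kubeAPIServer:
      eventTTL: 2h            # default is 1h
    kubeScheduler:
      profile: bin-packing    # alternative to the default `balanced` profile
```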

-In case you have specific requirements for the cluster internal DNS, Gardener offers a plugin mechanism for custom core DNS rules or optimization with node-local DNS. For more information, see [Custom DNS Configuration](https://github.com/gardener/gardener/blob/master/docs/usage/networking/custom-dns-config.md) and [NodeLocalDNS Configuration](https://github.com/gardener/gardener/blob/master/docs/usage/node-local-dns.md).
+In case you have specific requirements for the cluster internal DNS, Gardener offers a plugin mechanism for custom core DNS rules or optimization with node-local DNS. For more information, see [Custom DNS Configuration](https://github.com/gardener/gardener/blob/master/docs/usage/networking/custom-dns-config.md) and [NodeLocalDNS Configuration](https://github.com/gardener/gardener/blob/master/docs/usage/networking/node-local-dns.md).

Another category of configuration options is dedicated to the nodes and the infrastructure they are running on. Every provider has their own perks and some of them are exposed. Check the detailed documentation of the relevant extension for your infrastructure provider.

@@ -45,7 +45,7 @@ For more information, see [Semantic Versioning](http://semver.org/).

Gardener allows to classify versions in the `CloudProfile` as `preview`, `supported`, `deprecated`, or `expired`. During maintenance operations, `preview` versions are excluded from updates, because they’re often recently released versions that haven’t yet undergone thorough testing and may contain bugs or security issues.

-For more information, see [Version Classifications](https://github.com/gardener/gardener/blob/master/docs/usage/shoot_versions.md#version-classifications).
+For more information, see [Version Classifications](https://github.com/gardener/gardener/blob/master/docs/usage/shoot-operations/shoot_versions.md#version-classifications).
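
For illustration, a sketch of how such classifications might appear in a `CloudProfile` (field names per the Gardener API; the version numbers are made up):

```yaml
spec:
  kubernetes:
    versions:
    - version: 1.31.2
      classification: preview      # excluded from maintenance updates
    - version: 1.30.6
      classification: supported
    - version: 1.29.10
      classification: deprecated
      expirationDate: "2025-03-31T23:59:59Z"
```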

## Let Gardener Manage Your Updates

@@ -90,8 +90,8 @@ The scalability of `Nodes` is subject to a range of limiting factors. Some of th

**CIDR**:

-Upon cluster creation, you have to specify or use the default values for several network segments. There are dedicated CIDRs for services, `Pods`, and `Nodes`. Each defines a range of IP addresses available for the individual resource type. Obviously, the maximum of possible `Nodes` is capped by the CIDR for `Nodes`.
-However, there is a second limiting factor, which is the pod CIDR combined with the `nodeCIDRMaskSize`. This mask is used to divide the pod CIDR into smaller subnets, where each blocks gets assigned to a node. With a `/16` pod network and a `/24` nodeCIDRMaskSize, a cluster can scale up to 256 `Nodes`. Please check [Shoot Networking](https://github.com/gardener/gardener/blob/master/docs/usage/shoot_networking.md) for details.
+Upon cluster creation, you have to specify or use the default values for several network segments. There are dedicated CIDRs for services, `Pods`, and `Nodes`. Each defines a range of IP addresses available for the individual resource type. Obviously, the maximum of possible `Nodes` is capped by the CIDR for `Nodes`.
+However, there is a second limiting factor, which is the pod CIDR combined with the `nodeCIDRMaskSize`. This mask is used to divide the pod CIDR into smaller subnets, where each blocks gets assigned to a node. With a `/16` pod network and a `/24` nodeCIDRMaskSize, a cluster can scale up to 256 `Nodes`. Please check [Shoot Networking](https://github.com/gardener/gardener/blob/master/docs/usage/networking/shoot_networking.md) for details.

Even though a `/24` nodeCIDRMaskSize translates to a theoretical 256 pod IP addresses per `Node`, the `maxPods` setting should be less than 1/2 of this value. This gives the system some breathing room for churn and minimizes the risk for strange effects like mis-routed packages caused by immediate re-use of IPs.
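
To make the arithmetic concrete, here is a hedged sketch of the relevant Shoot fields (field paths per the Gardener Shoot API; the CIDR values are illustrative):

```yaml
spec:
  networking:
    pods: 100.96.0.0/16        # /16 pod network
    nodes: 10.250.0.0/16
    services: 100.64.0.0/13
  kubernetes:
    kubeControllerManager:
      nodeCIDRMaskSize: 24     # each node gets a /24 slice of the pod CIDR
      # 2^(24-16) = 256 per-node subnets -> at most 256 nodes
    kubelet:
      maxPods: 110             # well below half of the 256 pod IPs per /24
```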

@@ -130,7 +130,7 @@ While webhooks provide powerful means to manage a cluster, they are equally powe

Hence, you have to ensure proper sizing, quick processing time, and availability of the webhook serving `Pods` when deploying webhooks. Please consult Dynamic Admission Control ([Availability](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#availability) and [Timeouts](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#timeouts) sections) for details. You should also be aware of the time added to any request that has to go through a webhook, as the `kube-apiserver` sends the request for mutation / validation to another pod and waits for the response. The more resources being subject to an external webhook, the more likely this will become a bottleneck when having a high churn rate on resources. Within the Gardener monitoring stack, you can check the extra time per webhook via the "API Server (Admission Details)" dashboard, which has a panel for "Duration per Webhook".

-In Gardener, any webhook timeout should be less than 15 seconds. Due to the separation of Kubernetes data-plane (shoot) and control-plane (seed) in Gardener, the extra hop from `kube-apiserver` (control-plane) to webhook (data-plane) is more expensive. Please check [Shoot Status](https://github.com/gardener/gardener/blob/master/docs/usage/shoot_status.md) for more details.
+In Gardener, any webhook timeout should be less than 15 seconds. Due to the separation of Kubernetes data-plane (shoot) and control-plane (seed) in Gardener, the extra hop from `kube-apiserver` (control-plane) to webhook (data-plane) is more expensive. Please check [Shoot Status](https://github.com/gardener/gardener/blob/master/docs/usage/shoot/shoot_status.md) for more details.
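
As a hedged example (names are hypothetical; field semantics per the Kubernetes admission webhook API), a webhook configuration with a conservative timeout and a non-blocking failure policy could look like this:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-policy                 # hypothetical
webhooks:
- name: validate.example.io            # hypothetical
  timeoutSeconds: 10                   # keep well below 15s; the API allows up to 30s
  failurePolicy: Ignore                # don't block requests if the webhook is unavailable
  clientConfig:
    service:
      name: example-webhook
      namespace: default
      path: /validate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
```

Whether `Ignore` or `Fail` is the right failure policy depends on whether the policy is security-relevant; `Ignore` favors cluster availability over strict enforcement.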

### Custom Resource Definitions

@@ -18,7 +18,7 @@ There are a few potential reasons why nodes can be removed:
- the K8s/OS version
- changing machine types

-Helpful information can be obtained by using the logging stack. See [Logging Stack](https://github.com/gardener/gardener/blob/master/docs/usage/logging.md) for how to utilize the logging information in Gardener.
+Helpful information can be obtained by using the logging stack. See [Logging Stack](https://github.com/gardener/gardener/blob/master/docs/usage/observability/logging.md) for how to utilize the logging information in Gardener.

## Find Out Whether the Node Was `unhealthy`
