diff --git a/versioned_docs/version-0.12/developer-guide/development.md b/versioned_docs/version-0.12/developer-guide/development.md new file mode 100644 index 00000000..13a22932 --- /dev/null +++ b/versioned_docs/version-0.12/developer-guide/development.md @@ -0,0 +1,58 @@ +--- +sidebar_position: 3 +--- + +# Development setup + +## Prerequisites + +- [kind](https://kind.sigs.k8s.io/) +- [helm](https://helm.sh/) +- [tilt](https://tilt.dev/) + +## Create a local development environment + +1. Clone the [Rancher Turtles](https://github.com/rancher/turtles) repository locally + +2. Create **tilt-settings.yaml**: + +```yaml +{ + "k8s_context": "k3d-rancher-test", + "default_registry": "ghcr.io/turtles-dev", + "debug": { + "turtles": { + "continue": true, + "port": 40000 + } + } +} +``` + +3. Open a terminal in the root of the Rancher Turtles repository +4. Run the following: + +```bash +make dev-env + +# Or if you want to use a custom hostname for Rancher +RANCHER_HOSTNAME=my.customhost.dev make dev-env +``` + +5. When Tilt has started, open a new terminal and start ngrok or inlets + +```bash +kubectl port-forward --namespace cattle-system svc/rancher 10000:443 +ngrok http https://localhost:10000 +``` + +## What happens when you run `make dev-env`? + +1. A [kind](https://kind.sigs.k8s.io/) cluster is created with the following [configuration](https://github.com/rancher/turtles/blob/main/scripts/kind-cluster-with-extramounts.yaml). +1. [Cluster API Operator](../developer-guide/install_capi_operator.md) is installed using Helm, which includes: + - Core Cluster API controller + - Kubeadm Bootstrap and Control Plane Providers + - Docker Infrastructure Provider + - Cert manager +1. `Rancher Manager` is installed using Helm. +1. `tilt up` is run to start the development environment. 
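Before running `make dev-env`, it can help to confirm that the prerequisite CLIs are on your `PATH`. A minimal sketch (this helper script is illustrative and not part of the repository):

```shell
#!/bin/sh
# Illustrative pre-flight check: report any required CLI missing from PATH.
check_tools() {
  missing=0
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool"
      missing=1
    fi
  done
  return $missing
}

# kind, helm and tilt come from the prerequisites above; kubectl is needed
# for the port-forward step.
check_tools kind helm tilt kubectl || echo "install the tools listed above before running 'make dev-env'"
```

The script prints a `missing:` line for each absent tool, so it can be run on its own or wired into a Makefile target of your choosing.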
diff --git a/versioned_docs/version-0.12/developer-guide/install_capi_operator.md b/versioned_docs/version-0.12/developer-guide/install_capi_operator.md new file mode 100644 index 00000000..01c99dca --- /dev/null +++ b/versioned_docs/version-0.12/developer-guide/install_capi_operator.md @@ -0,0 +1,112 @@ +--- +sidebar_position: 2 +--- + +# Installing Cluster API Operator + +:::caution +Installing Cluster API Operator by following this page (without it being a Helm dependency to Rancher Turtles) is not a recommended installation method and is intended only for local development purposes. +::: + +This section describes how to install `Cluster API Operator` in the Kubernetes cluster. + +## Installing Cluster API (CAPI) and providers + +`CAPI` and the desired `CAPI` providers can be installed using the Helm-based installation for [`Cluster API Operator`](https://github.com/kubernetes-sigs/cluster-api-operator) or as a Helm dependency of `Rancher Turtles`. + +### Install manually with Helm (alternative) + +To install `Cluster API Operator` with version `1.7.3` of the `CAPI` + `Docker` provider using Helm, follow these steps: + +1. Add the Helm repository for the `Cluster API Operator`: +```bash +helm repo add capi-operator https://kubernetes-sigs.github.io/cluster-api-operator +helm repo add jetstack https://charts.jetstack.io +``` +2. Update the Helm repository: +```bash +helm repo update +``` +3. Install cert-manager: +```bash +helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true +``` +4. 
Install the `Cluster API Operator`: +```bash +helm install capi-operator capi-operator/cluster-api-operator \ + --create-namespace -n capi-operator-system \ + --set infrastructure=docker:v1.7.3 \ + --set core=cluster-api:v1.7.3 \ + --timeout 90s --wait # Core Cluster API with kubeadm bootstrap and control plane providers will also be installed +``` + +:::note +`cert-manager` is a hard requirement for `CAPI` and `Cluster API Operator`. +::: + +To provide additional environment variables, enable feature gates, or supply cloud credentials, similar to `clusterctl` [common provider](https://cluster-api.sigs.k8s.io/user/quick-start#initialization-for-common-providers) variables, a variables secret can be specified for the `Cluster API Operator` by providing the `name` and `namespace` of the secret, as shown below. + +```bash +helm install capi-operator capi-operator/cluster-api-operator \ + --create-namespace -n capi-operator-system \ + --set infrastructure=docker:v1.7.3 \ + --set core=cluster-api:v1.7.3 \ + --timeout 90s \ + --secret-name \ + --wait +``` + +Example secret data: +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: variables + namespace: default +type: Opaque +stringData: + CLUSTER_TOPOLOGY: "true" + EXP_CLUSTER_RESOURCE_SET: "true" +``` + +To select more than one desired provider to be installed together with the `Cluster API Operator`, the `infrastructure` value can be set to multiple provider names separated by a semicolon. For example: + +```bash +helm install ... --set infrastructure="docker:v1.7.3;aws:v2.6.1" +``` + +The `infrastructure` value is set to `docker:v1.7.3;aws:v2.6.1`, representing the desired provider names. This means that the `Cluster API Operator` will install and manage multiple providers, `Docker` and `AWS`, with versions `v1.7.3` and `v2.6.1` respectively. + +The cluster is now ready to install Rancher Turtles. The default behavior when installing the chart is to install Cluster API Operator as a Helm dependency. 
Since we decided to install it manually before installing Rancher Turtles, the feature `cluster-api-operator.enabled` must be explicitly disabled, as otherwise it would conflict with the existing installation. You can refer to [Install Rancher Turtles without Cluster API Operator](../developer-guide/install_capi_operator.md#install-rancher-turtles-without-cluster-api-operator-as-a-helm-dependency) for the next steps. + +:::tip +For more fine-grained control of the providers and other components installed with CAPI, see the [Add the infrastructure provider](../tasks/capi-operator/add_infrastructure_provider.md) section. +::: + +### Install Rancher Turtles without `Cluster API Operator` as a Helm dependency + +:::note +This option is only suitable for development purposes and not recommended for production environments. +::: + +The `rancher-turtles` chart is available at https://rancher.github.io/turtles and this Helm repository must be added before proceeding with the installation: + +```bash +helm repo add turtles https://rancher.github.io/turtles +helm repo update +``` + +and then it can be installed into the `rancher-turtles-system` namespace with: + +```bash +helm install rancher-turtles turtles/rancher-turtles --version v0.12.0 \ + -n rancher-turtles-system \ + --set cluster-api-operator.enabled=false \ + --set cluster-api-operator.cluster-api.enabled=false \ + --create-namespace --wait \ + --dependency-update +``` + +As you can see, we are telling Helm to skip installing `cluster-api-operator` as a dependency. + diff --git a/versioned_docs/version-0.12/developer-guide/intro.md b/versioned_docs/version-0.12/developer-guide/intro.md new file mode 100644 index 00000000..02880f46 --- /dev/null +++ b/versioned_docs/version-0.12/developer-guide/intro.md @@ -0,0 +1,7 @@ +--- +sidebar_position: 0 +--- + +# Introduction + +Everything you need to know about developing Rancher Turtles. 
diff --git a/versioned_docs/version-0.12/getting-started/air-gapped-environment.md b/versioned_docs/version-0.12/getting-started/air-gapped-environment.md new file mode 100644 index 00000000..7216f7f8 --- /dev/null +++ b/versioned_docs/version-0.12/getting-started/air-gapped-environment.md @@ -0,0 +1,125 @@ +--- +sidebar_position: 3 +--- + +# Air-gapped environment + +Rancher Turtles provides support for an air-gapped environment out-of-the-box by leveraging features of the Cluster API Operator, the required dependency for installing Rancher Turtles. + +To provision and configure Cluster API providers, Turtles uses the **CAPIProvider** resource to allow managing Cluster API Operator manifests in a declarative way. Every field provided by the upstream CAPI Operator resource for the desired `spec.type` is also available in the `spec` of the **CAPIProvider** resource. + +To install Cluster API providers in an air-gapped environment, the following will need to be done: + +1. Configure the Cluster API Operator for an air-gapped environment: + - The operator chart will be fetched and stored as a part of the Turtles chart. + - Provide image overrides for the operator from an accessible image repository. +2. Configure Cluster API providers for an air-gapped environment: + - Provide fetch configuration for each provider from an accessible location (e.g., an internal GitHub/GitLab server) or from pre-created ConfigMaps within the cluster. + - Provide image overrides for each provider to pull images from an accessible image repository. +3. Configure Rancher Turtles for an air-gapped environment: + - Collect Rancher Turtles images and publish them to the private registry. [Example of collecting the cert-manager image, for reference](https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/publish-images#2-collect-the-cert-manager-image). 
+ - Provide fetch configuration and image values for `core` and `caprke2` providers in [values.yaml](../reference-guides/rancher-turtles-chart/values.md#cluster-api-operator-values). + - Provide the image value for the Cluster API Operator Helm chart dependency in [values.yaml](https://github.com/kubernetes-sigs/cluster-api-operator/blob/main/hack/charts/cluster-api-operator/values.yaml#L26). Image values specified with the `cluster-api-operator` key will be passed along to the Cluster API Operator. + +## Example Usage + +As an admin, I need to fetch the vSphere provider (CAPV) components from within the cluster because I am working in an air-gapped environment. + +In this example, there is a ConfigMap in the `capv-system` namespace that defines the components and metadata of the provider. It can be created manually or by running the following commands: + +```bash +# Get the file contents from the GitHub release +curl -L https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/releases/download/v1.8.5/infrastructure-components.yaml -o components.yaml +curl -L https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/releases/download/v1.8.5/metadata.yaml -o metadata.yaml + +# Create the configmap from the files +kubectl create configmap v1.8.5 --namespace=capv-system --from-file=components=components.yaml --from-file=metadata=metadata.yaml --dry-run=client -o yaml > configmap.yaml +``` + +This command example would need to be adapted to the provider and version you want to use. The resulting ConfigMap will look similar to the example below: + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + provider-components: vsphere + name: v1.8.5 + namespace: capv-system +data: + components: | + # Components for v1.8.5 YAML go here + metadata: | + # Metadata information goes here +``` + +A **CAPIProvider** resource will need to be created to represent the vSphere infrastructure provider. It will need to be configured with a `fetchConfig`. 
The label selector allows the operator to determine the available versions of the vSphere provider and the Kubernetes resources that need to be deployed (i.e. contained within ConfigMaps which match the label selector). + +Since the provider's version is marked as `v1.8.5`, the operator uses the components information from the ConfigMap with a matching label to install the vSphere provider. + +```yaml +apiVersion: turtles-capi.cattle.io/v1alpha1 +kind: CAPIProvider +metadata: + name: vsphere + namespace: capv-system +spec: + name: vsphere + type: infrastructure + version: v1.8.5 + configSecret: + name: vsphere-variables + fetchConfig: + selector: + matchLabels: + provider-components: vsphere + deployment: + containers: + - name: manager + imageUrl: "gcr.io/myregistry/capv-controller:v1.8.5-foo" + variables: + CLUSTER_TOPOLOGY: "true" + EXP_CLUSTER_RESOURCE_SET: "true" + EXP_MACHINE_POOL: "true" +``` + +Additionally, the **CAPIProvider** overrides the container image to use for the provider using the `deployment.containers[].imageUrl` field. This allows the operator to pull the image from a registry within the air-gapped environment. + +### When manifests do not fit into a ConfigMap + +There is a limit on the [maximum size](https://kubernetes.io/docs/concepts/configuration/configmap/#motivation) of a ConfigMap: 1 MiB. If the manifests exceed this size, Kubernetes will generate an error and provider installation will fail. To avoid this, you can compress the manifests and store them in the ConfigMap in archived form. + +For example, suppose you have two files: `components.yaml` and `metadata.yaml`. To create a working ConfigMap: + +1. Archive `components.yaml` using the `gzip` CLI tool + +```sh +gzip -c components.yaml > components.gz +``` + +2. 
Create a ConfigMap manifest from the archived data + +```sh +kubectl create configmap v1.8.5 --namespace=capv-system --from-file=components=components.gz --from-file=metadata=metadata.yaml --dry-run=client -o yaml > configmap.yaml +``` + +3. Edit the file by adding the `provider.cluster.x-k8s.io/compressed: "true"` annotation + +```sh +yq eval -i '.metadata.annotations += {"provider.cluster.x-k8s.io/compressed": "true"}' configmap.yaml +``` + +**Note**: without this annotation, the operator won't be able to determine whether the data is compressed. + +4. Add labels that will be used to match the ConfigMap in the `fetchConfig` section of the provider + +```sh +yq eval -i '.metadata.labels += {"my-label": "label-value"}' configmap.yaml +``` + +5. Create the ConfigMap in your Kubernetes cluster using kubectl + +```sh +kubectl create -f configmap.yaml +``` \ No newline at end of file diff --git a/versioned_docs/version-0.12/getting-started/cluster-class/create_cluster.md b/versioned_docs/version-0.12/getting-started/cluster-class/create_cluster.md new file mode 100644 index 00000000..92dcaad9 --- /dev/null +++ b/versioned_docs/version-0.12/getting-started/cluster-class/create_cluster.md @@ -0,0 +1,139 @@ +--- +sidebar_position: 2 +--- + +# Create a cluster using Fleet + +This section will guide you through creating a cluster that utilizes ClusterClass using a GitOps workflow with Fleet. + +:::note +This guide uses the [examples repository](https://github.com/rancher-sandbox/rancher-turtles-fleet-example/tree/clusterclass). 
+::: + +## Prerequisites + +- Rancher Manager cluster with Rancher Turtles installed +- Cluster API providers installed for your scenario - we'll be using the Docker infrastructure and Kubeadm bootstrap/control plane providers in these instructions - see [Initialization for common providers](https://cluster-api.sigs.k8s.io/user/quick-start.html#initialization-for-common-providers) +- The **ClusterClass** feature enabled - see [the introduction](./intro.md) + +## Configure Rancher Manager + +The ClusterClass and cluster definitions will be imported into the Rancher Manager cluster (which is also acting as a Cluster API management cluster) using the **Continuous Delivery** feature (which uses Fleet). + +The guide will apply the manifests using a 2-step process. However, this isn't required and they could be combined into one step. + +There are 2 options to provide the configuration. The first is using the Rancher Manager UI and the second is by applying some YAML to your cluster. Both are covered below. + +### Import ClusterClass Definitions + +#### Using the Rancher Manager UI + +1. Go to Rancher Manager +2. Select **Continuous Delivery** from the menu +3. Select **fleet-local** as the namespace from the top right +4. Select **Git Repos** from the sidebar +5. Click **Add Repository** +6. Enter **classes** as the name +7. Get the **HTTPS** clone URL from your git repo +8. Add the URL into the **Repository URL** field +9. Change the branch name to **clusterclass** +10. Click **Add Path** +11. Enter `/classes` +12. Click **Next** +13. Click **Create** +14. Click on the **classes** name +15. Watch the resources become ready + +#### Using kubectl + +1. Get the **HTTPS** clone URL from your git repo +2. Create a new file called **repo.yaml** +3. 
Add the following contents to the new file: + +```yaml +apiVersion: fleet.cattle.io/v1alpha1 +kind: GitRepo +metadata: + name: classes + namespace: fleet-local +spec: + branch: clusterclass + repo: https://github.com/rancher-sandbox/rancher-turtles-fleet-example.git + paths: + - /classes + targets: [] +``` + +4. Apply the file to the Rancher Manager cluster using **kubectl**: + +```bash +kubectl apply -f repo.yaml +``` + +5. Go to Rancher Manager +6. Select **Continuous Delivery** from the sidebar +7. Select **fleet-local** as the namespace from the top right +8. Select **Git Repos** from the sidebar +9. Click on the **classes** name +10. Watch the resources become ready +11. Select **Cluster Management** from the menu +12. Check your cluster has been imported + +### Import Cluster Definitions + +Now that the classes have been imported, it's possible to use them with cluster definitions. + +#### Using the Rancher Manager UI + +1. Go to Rancher Manager +2. Select **Continuous Delivery** from the menu +3. Select **fleet-local** as the namespace from the top right +4. Select **Git Repos** from the sidebar +5. Click **Add Repository** +6. Enter **clusters** as the name +7. Get the **HTTPS** clone URL from your git repo +8. Add the URL into the **Repository URL** field +9. Change the branch name to **clusterclass** +10. Click **Add Path** +11. Enter `/clusters` +12. Click **Next** +13. Click **Create** +14. Click on the **clusters** name +15. Watch the resources become ready +16. Select **Cluster Management** from the menu +17. Check your cluster has been imported + +#### Using kubectl + +1. Get the **HTTPS** clone URL from your git repo +2. Create a new file called **repo.yaml** +3. 
Add the following contents to the new file: + +```yaml +apiVersion: fleet.cattle.io/v1alpha1 +kind: GitRepo +metadata: + name: clusters + namespace: fleet-local +spec: + branch: clusterclass + repo: https://github.com/rancher-sandbox/rancher-turtles-fleet-example.git + paths: + - /clusters + targets: [] +``` + +4. Apply the file to the Rancher Manager cluster using **kubectl**: + +```bash +kubectl apply -f repo.yaml +``` + +5. Go to Rancher Manager +6. Select **Continuous Delivery** from the sidebar +7. Select **fleet-local** as the namespace from the top right +8. Select **Git Repos** from the sidebar +9. Click on the **clusters** name +10. Watch the resources become ready +11. Select **Cluster Management** from the menu +12. Check your cluster has been imported diff --git a/versioned_docs/version-0.12/getting-started/cluster-class/intro.md b/versioned_docs/version-0.12/getting-started/cluster-class/intro.md new file mode 100644 index 00000000..485e1228 --- /dev/null +++ b/versioned_docs/version-0.12/getting-started/cluster-class/intro.md @@ -0,0 +1,41 @@ +--- +sidebar_position: 1 +--- + +# Introduction + +In this section we cover using **ClusterClass** with Rancher Turtles. + +:::caution +ClusterClass is an experimental feature of Cluster API. As with any experimental feature it should be used with caution as it may be unreliable. Experimental features are not subject to any compatibility or deprecation promise. +::: + +## Prerequisites + +To use ClusterClass it needs to be enabled for core Cluster API and any provider that supports it. This is done by setting the `CLUSTER_TOPOLOGY` variable to `true`. + +The Rancher Turtles Helm chart will set this variable by default when it's installed. However, when enabling additional providers you will have to ensure `CLUSTER_TOPOLOGY` is set in the provider's secret. 
For example: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: capd-variables + namespace: default +type: Opaque +stringData: + CLUSTER_TOPOLOGY: "true" +``` + +Which can then be referenced from a provider declaration. For example: + +```yaml +apiVersion: operator.cluster.x-k8s.io/v1alpha1 +kind: InfrastructureProvider +metadata: + name: docker + namespace: capd-system +spec: + secretName: capd-variables + secretNamespace: default +``` diff --git a/versioned_docs/version-0.12/getting-started/create-first-cluster/gh_clone.png b/versioned_docs/version-0.12/getting-started/create-first-cluster/gh_clone.png new file mode 100644 index 00000000..1909eba2 Binary files /dev/null and b/versioned_docs/version-0.12/getting-started/create-first-cluster/gh_clone.png differ diff --git a/versioned_docs/version-0.12/getting-started/create-first-cluster/intro.md b/versioned_docs/version-0.12/getting-started/create-first-cluster/intro.md new file mode 100644 index 00000000..a634c392 --- /dev/null +++ b/versioned_docs/version-0.12/getting-started/create-first-cluster/intro.md @@ -0,0 +1,12 @@ +--- +sidebar_position: 1 +--- + +# Introduction + +Everything you need to know about creating and importing your first CAPI cluster with Rancher Turtles. 
+ +Choose one of the following options: + +- [If you are using Fleet](using_fleet.md) +- [If you want to use kubectl](using_kubectl.md) diff --git a/versioned_docs/version-0.12/getting-started/create-first-cluster/ns.png b/versioned_docs/version-0.12/getting-started/create-first-cluster/ns.png new file mode 100644 index 00000000..b9b7ff41 Binary files /dev/null and b/versioned_docs/version-0.12/getting-started/create-first-cluster/ns.png differ diff --git a/versioned_docs/version-0.12/getting-started/create-first-cluster/sidebar.png b/versioned_docs/version-0.12/getting-started/create-first-cluster/sidebar.png new file mode 100644 index 00000000..a7371d20 Binary files /dev/null and b/versioned_docs/version-0.12/getting-started/create-first-cluster/sidebar.png differ diff --git a/versioned_docs/version-0.12/getting-started/create-first-cluster/using_fleet.md b/versioned_docs/version-0.12/getting-started/create-first-cluster/using_fleet.md new file mode 100644 index 00000000..0fd00327 --- /dev/null +++ b/versioned_docs/version-0.12/getting-started/create-first-cluster/using_fleet.md @@ -0,0 +1,145 @@ +--- +sidebar_position: 3 +--- + +# Create & import your first cluster using Fleet + +This section will guide you through creating your first cluster and importing it into Rancher Manager using a GitOps workflow with Fleet. 
+ +## Prerequisites + +- Rancher Manager cluster with Rancher Turtles installed +- Cluster API providers installed for your scenario - we'll be using the [Docker infrastructure](https://github.com/kubernetes-sigs/cluster-api/tree/main/test/infrastructure/docker) and [RKE2 bootstrap/control plane](https://github.com/rancher-sandbox/cluster-api-provider-rke2) providers in these instructions - see [Initialization for common providers using Turtles' `CAPIProvider`](../../tasks/capi-operator/capiprovider_resource.md) +- **clusterctl** CLI - see the [releases](https://github.com/kubernetes-sigs/cluster-api/releases) + +## Create your cluster definition + +The **clusterctl** CLI can be used to generate the YAML for a cluster. When you run `clusterctl generate cluster`, it will connect to the management cluster to see what infrastructure providers have been installed. Also, it will take care of replacing any tokens in the chosen template (a.k.a. flavor) with values from environment variables. + +Alternatively, you can craft the YAML for your cluster manually. If you decide to do this then you can use the **templates** that infrastructure providers publish as part of their releases. For example, the AWS provider [publishes files](https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/tag/v2.2.1) prefixed with **cluster-template** that can be used as a base. You will need to replace any tokens yourself, or do so using clusterctl (e.g. `clusterctl generate cluster test1 --from https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/download/v2.2.1/cluster-template-eks.yaml > cluster.yaml`). + +:::tip +To maintain proper resource management and avoid accidental deletion of custom resources managed outside of Helm during Helm operations, include the `"helm.sh/resource-policy": keep` annotation in the top-level CAPI kinds within your cluster manifests. +::: + +:::note +This guide does not use ClusterClass. 
Templates that use ClusterClass will require that the experimental feature be enabled. +::: + +To generate the YAML for the cluster, do the following (assuming the Docker infrastructure provider is being used): + +1. Open a terminal and run the following: + +```bash +export CONTROL_PLANE_MACHINE_COUNT=1 +export WORKER_MACHINE_COUNT=1 +export KUBERNETES_VERSION=v1.30.0 + +clusterctl generate cluster cluster1 \ +--from https://raw.githubusercontent.com/rancher-sandbox/rancher-turtles-fleet-example/templates/docker-rke2.yaml \ +> cluster1.yaml +``` + +2. View **cluster1.yaml** to ensure there are no tokens. You can make any changes you want as well. + +:::tip +The Cluster API quickstart guide contains more detail. Read the steps related to this section [here](https://cluster-api.sigs.k8s.io/user/quick-start.html#required-configuration-for-common-providers). +::: + +## Create your repo for Fleet + +1. Create a new git repository (this guide uses GitHub) +2. Create a new folder called **clusters** +3. Move the **cluster1.yaml** file you generated in the last section to the **clusters** folder. +4. Create a file called **fleet.yaml** in the root and add the following contents + +```yaml +namespace: default +``` + +5. Commit the changes + +:::note +The **fleet.yaml** is used to specify configuration options for Fleet (see [docs](https://fleet.rancher.io/ref-fleet-yaml) for further details). In this instance it's declaring that the cluster definitions should be added to the **default** namespace. +::: + +After these steps, you will have a repository with a structure similar to the [example repository](https://github.com/rancher-sandbox/rancher-turtles-fleet-example). + +## Mark Namespace for auto-import + +To automatically import a CAPI cluster into Rancher Manager, there are 2 options: + +1. Label a namespace so all clusters contained in it are imported. +2. Label an individual cluster definition so that it's imported. 
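The second option can also be expressed declaratively by setting the label in the cluster manifest itself instead of running `kubectl label` afterwards. A sketch (the cluster name and namespace are illustrative):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: cluster1
  namespace: default
  labels:
    cluster-api.cattle.io/rancher-auto-import: "true"
```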
+ +In both cases the label is `cluster-api.cattle.io/rancher-auto-import`. + +This walkthrough will use the first option of importing all clusters in a namespace. + +1. Open a terminal +2. Label the default namespace in your Rancher Manager cluster: + +```bash +kubectl label namespace default cluster-api.cattle.io/rancher-auto-import=true +``` + +## Configure Rancher Manager + +Now that the cluster definitions are committed to a git repository, they can be used to provision the clusters. To do this, they will need to be imported into the Rancher Manager cluster (which is also acting as a Cluster API management cluster) using the **Continuous Delivery** feature (which uses Fleet). + +There are 2 options to provide the configuration. The first is using the Rancher Manager UI and the second is by applying some YAML to your cluster. Both are covered below. + +### Using the Rancher Manager UI + +1. Go to Rancher Manager +2. Select **Continuous Delivery** from the menu: +![sidebar](sidebar.png) +3. Select **fleet-local** as the namespace from the top right +![namespace](ns.png) +4. Select **Git Repos** from the sidebar +5. Click **Add Repository** +6. Enter **clusters** as the name +7. Get the **HTTPS** clone URL from your git repo +![git clone url](gh_clone.png) +8. Add the URL into the **Repository URL** field +9. Change the branch name to **main** +10. Click **Next** +11. Click **Create** +12. Click on the **clusters** name +13. Watch the resources become ready +14. Select **Cluster Management** from the menu +15. Check your cluster has been imported + +### Using kubectl + +1. Get the **HTTPS** clone URL from your git repo +2. Create a new file called **repo.yaml** +3. Add the following contents to the new file: + +```yaml +apiVersion: fleet.cattle.io/v1alpha1 +kind: GitRepo +metadata: + name: clusters + namespace: fleet-local +spec: + branch: main + repo: + targets: [] +``` + +4. 
Apply the file to the Rancher Manager cluster using **kubectl**: + +```bash +kubectl apply -f repo.yaml +``` + +5. Go to Rancher Manager +6. Select **Continuous Delivery** from the sidebar +7. Select **fleet-local** as the namespace from the top right +8. Select **Git Repos** from the sidebar +9. Click on the **clusters** name +10. Watch the resources become ready +11. Select **Cluster Management** from the menu +12. Check your cluster has been imported + diff --git a/versioned_docs/version-0.12/getting-started/create-first-cluster/using_kubectl.md b/versioned_docs/version-0.12/getting-started/create-first-cluster/using_kubectl.md new file mode 100644 index 00000000..bb8cf504 --- /dev/null +++ b/versioned_docs/version-0.12/getting-started/create-first-cluster/using_kubectl.md @@ -0,0 +1,58 @@ +--- +sidebar_position: 3 +--- + +# Create & Import Your First Cluster Using kubectl + +This section will guide you through creating your first cluster and importing it into Rancher Manager using kubectl. + +## Prerequisites + +- Rancher Manager cluster with Rancher Turtles installed +- Cluster API providers installed for your scenario - we'll be using the [Docker infrastructure](https://github.com/kubernetes-sigs/cluster-api/tree/main/test/infrastructure/docker) and [RKE2 bootstrap/control plane](https://github.com/rancher-sandbox/cluster-api-provider-rke2) providers in these instructions - see [Initialization for common providers using Turtles' `CAPIProvider`](../../tasks/capi-operator/capiprovider_resource.md) +- **clusterctl** CLI - see the [releases](https://github.com/kubernetes-sigs/cluster-api/releases) + +## Create Your Cluster Definition + +To generate the YAML for the cluster, do the following (assuming the Docker infrastructure provider is being used): + +1. 
Open a terminal and run the following: + +```bash +export CONTROL_PLANE_MACHINE_COUNT=1 +export WORKER_MACHINE_COUNT=1 +export KUBERNETES_VERSION=v1.30.0 + +clusterctl generate cluster cluster1 \ +--from https://raw.githubusercontent.com/rancher-sandbox/rancher-turtles-fleet-example/templates/docker-rke2.yaml \ +> cluster1.yaml +``` + +2. View **cluster1.yaml** to ensure there are no tokens. You can make any changes you want as well. + +> The Cluster API quickstart guide contains more detail. Read the steps related to this section [here](https://cluster-api.sigs.k8s.io/user/quick-start.html#required-configuration-for-common-providers). + +3. Create the cluster using kubectl + +```bash +kubectl create -f cluster1.yaml +``` + +## Mark Namespace or Cluster for Auto-Import + +To automatically import a CAPI cluster into Rancher Manager, there are 2 options: + +1. Label a namespace so all clusters contained in it are imported. +2. Label an individual cluster definition so that it's imported. + +Labeling a namespace: + +```bash +kubectl label namespace default cluster-api.cattle.io/rancher-auto-import=true +``` + +Labeling an individual cluster definition: + +```bash +kubectl label cluster.cluster.x-k8s.io cluster1 cluster-api.cattle.io/rancher-auto-import=true +``` diff --git a/versioned_docs/version-0.12/getting-started/install-rancher-turtles/deployments-turtles.png b/versioned_docs/version-0.12/getting-started/install-rancher-turtles/deployments-turtles.png new file mode 100644 index 00000000..c165360a Binary files /dev/null and b/versioned_docs/version-0.12/getting-started/install-rancher-turtles/deployments-turtles.png differ diff --git a/versioned_docs/version-0.12/getting-started/install-rancher-turtles/install-turtles-from-ui.gif b/versioned_docs/version-0.12/getting-started/install-rancher-turtles/install-turtles-from-ui.gif new file mode 100644 index 00000000..80f6880e Binary files /dev/null and 
b/versioned_docs/version-0.12/getting-started/install-rancher-turtles/install-turtles-from-ui.gif differ diff --git a/versioned_docs/version-0.12/getting-started/install-rancher-turtles/using_helm.md b/versioned_docs/version-0.12/getting-started/install-rancher-turtles/using_helm.md new file mode 100644 index 00000000..7c023266 --- /dev/null +++ b/versioned_docs/version-0.12/getting-started/install-rancher-turtles/using_helm.md @@ -0,0 +1,87 @@ +--- +sidebar_position: 4 +--- + +# Via Helm install + +:::caution +In case you need to review the list of prerequisites (including `cert-manager`), you can refer to [this table](../intro.md#prerequisites). +::: + +Use this option if you want to manually apply the Helm chart and be in full control of the installation. + +The Cluster API Operator is required for installing Rancher Turtles and will be installed as a dependency of the Rancher Turtles Helm chart. + +CAPI Operator allows handling the lifecycle of Cluster API Providers using a declarative approach, extending the capabilities of `clusterctl`. If you want to learn more about it, you can refer to the [Cluster API Operator book](https://cluster-api-operator.sigs.k8s.io/). + +:::info +Before [installing Rancher Turtles](#install-rancher-turtles-with-cluster-api-operator-as-a-helm-dependency) in your Rancher environment, Rancher's `embedded-cluster-api` functionality must be disabled. This also includes cleaning up Rancher-specific webhooks that would otherwise conflict with CAPI ones. + +To simplify setting up Rancher for installing Rancher Turtles, the official Rancher Turtles Helm chart includes a `pre-install` hook that applies these changes, making it transparent to the end user: +- Disable the `embedded-cluster-api` feature in Rancher. +- Delete the `mutating-webhook-configuration` and `validating-webhook-configuration` webhooks that are no longer needed. 
+::: + +If you would like to understand how Rancher Turtles works and what the architecture looks like, you can refer to the [Architecture](../../reference-guides/architecture/intro.md) section. + +:::note +If uninstalling, you can refer to [Uninstalling Rancher Turtles](../uninstall_turtles.md). +::: + +### Install Rancher Turtles with `Cluster API Operator` as a Helm dependency + +The `rancher-turtles` chart is available at https://rancher.github.io/turtles, and this Helm repository must be added before proceeding with the installation: + +```bash +helm repo add turtles https://rancher.github.io/turtles +helm repo update +``` + +As mentioned before, installing Rancher Turtles requires the [Cluster API Operator](https://github.com/kubernetes-sigs/cluster-api-operator), and the Helm chart can handle its installation automatically with a minimum set of flags: + +```bash +helm install rancher-turtles turtles/rancher-turtles --version v0.12.0 \ + -n rancher-turtles-system \ + --dependency-update \ + --create-namespace --wait \ + --timeout 180s +``` + +This operation may take a few minutes and, after it completes, you can take some time to review the installed controllers, including: +- `rancher-turtles-controller`. +- `capi-operator`. + +:::note +For a list of Rancher Turtles versions, refer to the [Releases page](https://github.com/rancher/turtles/releases). +::: + +This is the basic, recommended configuration, which manages the creation of a secret containing the required CAPI feature flags (`CLUSTER_TOPOLOGY`, `EXP_CLUSTER_RESOURCE_SET` and `EXP_MACHINE_POOL` enabled) in the core provider namespace. These feature flags are required to enable additional Cluster API functionality. + +If you need to override the default behavior and use an existing secret (or add custom environment variables), you can pass the secret name via a Helm flag. 
In this case, as a user, you are in charge of managing the secret creation and its content, including the minimum required feature flags: `CLUSTER_TOPOLOGY`, `EXP_CLUSTER_RESOURCE_SET` and `EXP_MACHINE_POOL` enabled. + +```bash +helm install ... + # Passing secret name and namespace for additional environment variables + --set cluster-api-operator.cluster-api.configSecret.name= +``` + +The following is an example of a user-managed secret `cluster-api-operator.cluster-api.configSecret.name=variables` with the `CLUSTER_TOPOLOGY`, `EXP_CLUSTER_RESOURCE_SET` and `EXP_MACHINE_POOL` feature flags set and an extra custom variable: + +```yaml title="secret.yaml" +apiVersion: v1 +kind: Secret +metadata: + name: variables + namespace: rancher-turtles-system +type: Opaque +stringData: + CLUSTER_TOPOLOGY: "true" + EXP_CLUSTER_RESOURCE_SET: "true" + EXP_MACHINE_POOL: "true" + CUSTOM_ENV_VAR: "false" +``` + +:::info +For detailed information on the values supported by the chart and their usage, refer to [Helm chart options](../../reference-guides/rancher-turtles-chart/values). +::: + diff --git a/versioned_docs/version-0.12/getting-started/install-rancher-turtles/using_rancher_dashboard.md b/versioned_docs/version-0.12/getting-started/install-rancher-turtles/using_rancher_dashboard.md new file mode 100644 index 00000000..0fb2b0e2 --- /dev/null +++ b/versioned_docs/version-0.12/getting-started/install-rancher-turtles/using_rancher_dashboard.md @@ -0,0 +1,56 @@ +--- +sidebar_position: 4 +--- + +# Via Rancher Dashboard + +This is the recommended option for installing Rancher Turtles. + +Via the Rancher UI, just by adding the Turtles repository, you can let Rancher take care of the installation and configuration of the Cluster API Extension. + +:::caution +In case you need to review the list of prerequisites (including `cert-manager`), you can refer to [this table](../intro.md#prerequisites). 
+::: + +:::info +Before [installing Rancher Turtles](./using_helm.md#install-rancher-turtles-with-cluster-api-operator-as-a-helm-dependency) in your Rancher environment, Rancher's `embedded-cluster-api` functionality must be disabled. This also includes cleaning up Rancher-specific webhooks that would otherwise conflict with CAPI ones. + +To simplify setting up Rancher for installing Rancher Turtles, the official Rancher Turtles Helm chart includes a `pre-install` hook that applies these changes, making it transparent to the end user: +- Disable the `embedded-cluster-api` feature in Rancher. +- Delete the `mutating-webhook-configuration` and `validating-webhook-configuration` webhooks that are no longer needed. +::: + +If you would like to understand how Rancher Turtles works and what the architecture looks like, you can refer to the [Architecture](../../reference-guides/architecture/intro.md) section. + +:::note +If uninstalling, you can refer to [Uninstalling Rancher Turtles](../uninstall_turtles.md). +::: + +### Installation + +- From your browser, access Rancher Manager and explore the **local** cluster. +- Using the left navigation panel, go to `Apps` -> `Repositories`. +- Click `Create` to add a new repository. +- Enter the following: + - **Name**: `turtles`. + - **Index URL**: https://rancher.github.io/turtles. +- Wait for the `turtles` repository to have a status of `Active`. +- Go to `Apps` -> `Charts`. +- Filter for `turtles`. +- Click `Rancher Turtles - the Cluster API Extension`. +- Click `Install` -> `Next` -> `Install`. + +:::caution +By default, Rancher will not install Turtles into a [Project](https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces). Installing Turtles into a Project is not supported, and the default configuration `None` should be used to avoid unexpected behavior during installation. 
+::: + +![install-turtles-from-ui](./install-turtles-from-ui.gif) + +This will use the default values for the Helm chart, which are suitable for most installations. If your configuration requires overriding some of these defaults, you can either specify the values during installation from the Rancher UI or, alternatively, opt for the [manual installation via Helm](./using_helm.md). If you are interested in learning more about the available values, you can check the [reference guide](../../reference-guides/rancher-turtles-chart/values.md). + +The installation may take a few minutes and, when it finishes, you will be able to see the following new deployments in the cluster: +- `rancher-turtles-system/rancher-turtles-controller-manager` +- `rancher-turtles-system/rancher-turtles-cluster-api-operator` +- `capi-system/capi-controller-manager` + +![deployments-turtles](./deployments-turtles.png) diff --git a/versioned_docs/version-0.12/getting-started/intro.md b/versioned_docs/version-0.12/getting-started/intro.md new file mode 100644 index 00000000..08f89a89 --- /dev/null +++ b/versioned_docs/version-0.12/getting-started/intro.md @@ -0,0 +1,50 @@ +--- +slug: / +sidebar_position: 1 +--- + +# Introduction + +:::warning +Starting with Turtles `v0.9.0`, the process used for importing CAPI clusters into Rancher is now based on different controller logic. If you are a new user of Turtles, you can proceed normally and simply install the extension. If you have been using previous versions of Turtles and are upgrading to `v0.9.0`, we recommend you take a look at the migration mechanisms and their implications: +- [Automatic migration](../tasks/maintenance/automigrate_to_v3_import.md). +- [Manual migration](../tasks/maintenance/import_controller_upgrade.md). +::: + +Rancher Turtles is a Kubernetes Operator that provides integration between Rancher Manager and Cluster API (CAPI) with the aim of bringing full CAPI support to Rancher. 
With Rancher Turtles, you can: + +- Automatically import CAPI clusters into Rancher by installing the Rancher Cluster Agent in CAPI-provisioned clusters. +- Configure the CAPI Operator. + +## Demo + +This demo shows how to use the Rancher UI to install Rancher Turtles, create/import a CAPI cluster, and install monitoring on the cluster: + + + +## Prerequisites + +| Name | Version | Details | +| --- | --- | --- | +| Kubernetes cluster | `>=1.30.0` | | +| Helm | `>=3.12.0` | | +| Rancher | `>=2.9.0` | Using a [Helm-based](https://ranchermanager.docs.rancher.com/pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster#install-the-rancher-helm-chart) installation directly on any Kubernetes cluster or on a newly created [Amazon](https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rancher-on-amazon-eks), [Azure](https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rancher-on-aks) or 
[Google](https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rancher-on-gke) service-based options. | +| Cert-manager | `>=v1.15.2` | Using a [Helm-based](https://cert-manager.io/docs/installation/helm/#installing-with-helm) installation or via [kubectl apply](https://cert-manager.io/docs/installation/#default-static-install). | +| Cluster API Operator | `>=v0.13.0` | Using [Rancher UI](./install-rancher-turtles/using_rancher_dashboard.md) (recommended) or [Helm install](https://github.com/kubernetes-sigs/cluster-api-operator/blob/main/docs/README.md#method-2-use-helm-charts) (for development use cases). | +| Cluster API | `v1.7.3` | | +| Rancher Turtles | `>v0.12.0` | Using [Rancher UI](./install-rancher-turtles/using_rancher_dashboard.md) (recommended) or [Helm install](./install-rancher-turtles/using_helm.md) (for advanced use cases). | + +## Reference Guides + +This section focuses on implementation details, including the +[architecture](./reference-guides/architecture/intro), how Rancher Turtles integrates with Rancher, and [Helm chart configuration values](./reference-guides/rancher-turtles-chart/values). + +## Tasks + +In this section, we cover additional [operational tasks](./tasks/intro), including a basic `CAPIProvider` [installation](./tasks/capi-operator/basic_cluster_api_provider_installation), an [example](./tasks/capi-operator/add_infrastructure_provider) AWS infrastructure provider install using `CAPIProvider`, and [upgrade instructions](./tasks/maintenance/early_adopter_upgrade) for early adopters of Rancher Turtles. + +## Security + +Rancher Turtles meets [SLSA Level 3](https://slsa.dev/spec/v1.0/levels#build-l3) requirements with an appropriately hardened build platform, consistent build processes, and provenance distribution. 
This section contains more information on security-related topics: + +- [SLSA](./security/slsa) diff --git a/versioned_docs/version-0.12/getting-started/rancher.md b/versioned_docs/version-0.12/getting-started/rancher.md new file mode 100644 index 00000000..ae26b07b --- /dev/null +++ b/versioned_docs/version-0.12/getting-started/rancher.md @@ -0,0 +1,30 @@ +--- +sidebar_position: 2 +--- + +# Rancher Setup + +## Installing Rancher + +*If you're already running Rancher, you can skip this section and jump to [Install Rancher Turtles](./install-rancher-turtles/using_rancher_dashboard.md).* + +Helm is the recommended way to install `Rancher` in an existing or new Kubernetes cluster. + +:::tip +Make sure to follow one of the official [installation guides](https://ranchermanager.docs.rancher.com/pages-for-subheaders/installation-and-upgrade) for Rancher. +::: + +Here's a minimal configuration example of a command to install `Rancher`: + +```bash +helm install rancher rancher-stable/rancher \ + --namespace cattle-system \ + --create-namespace \ + --set hostname=<hostname> \ + --version <version> \ + --wait +``` + +Replace `<hostname>` with the actual hostname of your `Rancher` server and use the `--version` option to specify the version of `Rancher` you want to install. In this case, use the [recommended](../getting-started/intro.md#prerequisites) `Rancher` version for `Rancher Turtles`. + +You are now ready to install and use Rancher Turtles! 🎉 diff --git a/versioned_docs/version-0.12/getting-started/uninstall_turtles.md b/versioned_docs/version-0.12/getting-started/uninstall_turtles.md new file mode 100644 index 00000000..ebbb321d --- /dev/null +++ b/versioned_docs/version-0.12/getting-started/uninstall_turtles.md @@ -0,0 +1,43 @@ +--- +sidebar_position: 5 +--- + +# Uninstall Rancher Turtles + +This section gives an overview of the Rancher Turtles uninstallation process. + +:::caution +When installing Rancher Turtles in your Rancher environment, by default, Rancher Turtles enables the Cluster API Operator cleanup. 
This includes cleaning up Cluster API Operator-specific webhooks and deployments that otherwise cause issues with Rancher provisioning. + +To simplify uninstalling Rancher Turtles (via Rancher Manager or the helm command), the official Rancher Turtles Helm chart includes a `post-delete` hook that applies these changes, making it transparent to the end user: +- Delete the `mutating-webhook-configuration` and `validating-webhook-configuration` webhooks that are no longer needed. +- Delete the CAPI `deployments` that are no longer needed. +::: + +To uninstall the Rancher Turtles Extension, use the following helm command: + +```bash +helm uninstall -n rancher-turtles-system rancher-turtles --cascade foreground --wait +``` + +This may take a few minutes to complete. + +:::note +Remember that, if you use a different name for the installation or a different namespace, you may need to customize the command for your specific configuration. +::: + +Once uninstalled, Rancher's `embedded-cluster-api` feature must be re-enabled: + +1. Create a `feature.yaml` file with `embedded-cluster-api` set to true: +```yaml title="feature.yaml" +apiVersion: management.cattle.io/v3 +kind: Feature +metadata: + name: embedded-cluster-api +spec: + value: true +``` +2. 
Use `kubectl` to apply the `feature.yaml` file to the cluster: +```bash +kubectl apply -f feature.yaml +``` diff --git a/versioned_docs/version-0.12/reference-guides/architecture/30000ft_view.png b/versioned_docs/version-0.12/reference-guides/architecture/30000ft_view.png new file mode 100644 index 00000000..7441fa1b Binary files /dev/null and b/versioned_docs/version-0.12/reference-guides/architecture/30000ft_view.png differ diff --git a/versioned_docs/version-0.12/reference-guides/architecture/components.md b/versioned_docs/version-0.12/reference-guides/architecture/components.md new file mode 100644 index 00000000..e9217ad9 --- /dev/null +++ b/versioned_docs/version-0.12/reference-guides/architecture/components.md @@ -0,0 +1,33 @@ +--- +sidebar_position: 0 +--- + +# Components + +Below is a visual representation of the architecture components of Rancher +Turtles. This diagram illustrates the key elements and their relationships +within the Rancher Turtles system. Understanding these components is essential +for gaining insights into how Rancher leverages Cluster API (CAPI) for cluster +management. + +![overview](30000ft_view.png) + +## Rancher Manager + +This is the core component of Rancher, and users can leverage the existing +Explorer feature in the dashboard to access cluster workload details. + +## Rancher Cluster Agent + +The agent is deployed within child clusters, enabling Rancher to import and +establish a connection with these clusters. This connection allows Rancher to +manage the child clusters effectively from within its platform. + +## Rancher Turtles - Rancher CAPI Extension + +Rancher Turtles provides integration between CAPI and Rancher, currently supporting the +following functionalities: + +- **Importing CAPI clusters into Rancher:** installing the Rancher Cluster Agent in +CAPI-provisioned clusters. +- **CAPI Operator Configuration:** Configuration support for the CAPI Operator. 
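The CAPI Operator configuration support mentioned above is declarative: a `CAPIProvider` resource describes which provider should be installed, and the operator reconciles it. The following is a minimal sketch of such a manifest; the `turtles-capi.cattle.io/v1alpha1` group/version, field names, and namespace are assumptions to verify against the CRDs installed in your cluster (the Tasks section of these docs has the authoritative examples):

```yaml
# Hedged sketch: declaratively request the Docker infrastructure provider
# via the Rancher Turtles / CAPI Operator integration.
# Verify apiVersion, kind and fields against your installed CAPIProvider CRD.
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: docker
  namespace: capd-system
spec:
  type: infrastructure
```

Applying a manifest like this lets the operator manage the provider's lifecycle instead of requiring manual `clusterctl` invocations.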
diff --git a/versioned_docs/version-0.12/reference-guides/architecture/deployment.md b/versioned_docs/version-0.12/reference-guides/architecture/deployment.md new file mode 100644 index 00000000..86989f4d --- /dev/null +++ b/versioned_docs/version-0.12/reference-guides/architecture/deployment.md @@ -0,0 +1,24 @@ +--- +sidebar_position: 0 +--- + +# Deployment Scenarios + +:::note +Currently, Rancher Turtles only supports having Rancher Manager and +Rancher Turtles running in the same cluster. A topology with a separate Rancher +Manager cluster and one or multiple CAPI management clusters will be supported in +future releases. +::: + +## Rancher Manager & CAPI Management Combined + +In this topology, both Rancher Manager and Rancher Turtles are deployed to the +same Kubernetes cluster, which acts as a centralized management cluster. + +![Rancher Manager & CAPI Management Combined](in_cluster_topology.png) + +This architecture offers a simplified deployment of components and provides a +single view of all clusters. On the flip side, it's important to consider that +the number of clusters that can be managed effectively by Cluster API (CAPI) is +limited by the resources available within the single management cluster. 
diff --git a/versioned_docs/version-0.12/reference-guides/architecture/in_cluster_topology.png b/versioned_docs/version-0.12/reference-guides/architecture/in_cluster_topology.png new file mode 100644 index 00000000..7be5f6c9 Binary files /dev/null and b/versioned_docs/version-0.12/reference-guides/architecture/in_cluster_topology.png differ diff --git a/versioned_docs/version-0.12/reference-guides/architecture/intro.md b/versioned_docs/version-0.12/reference-guides/architecture/intro.md new file mode 100644 index 00000000..512e8603 --- /dev/null +++ b/versioned_docs/version-0.12/reference-guides/architecture/intro.md @@ -0,0 +1,21 @@ +--- +sidebar_position: 0 +--- + +# Introduction + +This guide offers a comprehensive overview of the core components and structure +that power Rancher Turtles and its integration within the Rancher ecosystem. + +:::tip +For guidance on setting up Rancher, refer to +[Rancher Setup](../../getting-started/rancher.md). + +For information on how to install Rancher Turtles, refer to +[Install Rancher Turtles using Rancher Dashboard](../../getting-started/install-rancher-turtles/using_rancher_dashboard.md). +::: + +**A Rancher user will use Rancher to manage clusters. 
Rancher will be able to use +Cluster API to manage the lifecycle of child Kubernetes clusters.** + +![intro](intro.png) diff --git a/versioned_docs/version-0.12/reference-guides/architecture/intro.png b/versioned_docs/version-0.12/reference-guides/architecture/intro.png new file mode 100644 index 00000000..49d88593 Binary files /dev/null and b/versioned_docs/version-0.12/reference-guides/architecture/intro.png differ diff --git a/versioned_docs/version-0.12/reference-guides/providers/addon-provider-fleet.md b/versioned_docs/version-0.12/reference-guides/providers/addon-provider-fleet.md new file mode 100644 index 00000000..97a19fdc --- /dev/null +++ b/versioned_docs/version-0.12/reference-guides/providers/addon-provider-fleet.md @@ -0,0 +1,38 @@ +--- +sidebar_position: 2 +--- + +# Cluster API Addon Provider Fleet + +## Overview + +Cluster API Add-on Provider for `Fleet` (CAAPF) is a Cluster API (CAPI) provider that provides integration with [`Fleet`](https://fleet.rancher.io/) to enable the easy deployment of applications to a CAPI-provisioned cluster. + +## Functionality + +- The provider will register a newly provisioned CAPI cluster with `Fleet` by creating a `Fleet` `Cluster` instance with the same `name` and `namespace`. Applications can be automatically deployed to the created cluster using `GitOps`. +- The provider will automatically create a Fleet `ClusterGroup` for every CAPI `ClusterClass` in the `ClusterClass` namespace. This enables you to deploy the same applications to all clusters created from the same `ClusterClass`. + +This allows a user to specify either a [`Bundle`](https://fleet.rancher.io/ref-bundle) resource with raw application workloads, or a [`GitRepo`](https://fleet.rancher.io/ref-gitrepo) resource to install applications from Git. 
Each of the resources can provide [`targets`](https://fleet.rancher.io/gitrepo-targets#defining-targets) with any combination of: + +```yaml + targets: + - clusterGroup: # if the cluster is created from a ClusterClass + - clusterName: +``` + +Additionally, `CAAPF` automatically propagates `CAPI` cluster labels to the `Fleet` cluster resource, so a user can specify a target matching a common cluster label with: + +```yaml + targets: + - clusterSelector: