From 814bf63854624fd347cde93238dbfe5c95aad6cf Mon Sep 17 00:00:00 2001 From: Per Goncalves da Silva Date: Thu, 3 Oct 2024 21:18:07 +0200 Subject: [PATCH] :book: Content organization (#1324) * organize docs Signed-off-by: Per Goncalves da Silva * First stab at doc hierarchy Signed-off-by: Per Goncalves da Silva --------- Signed-off-by: Per Goncalves da Silva Co-authored-by: Per Goncalves da Silva --- CONTRIBUTING.md | 11 +- Makefile | 13 +- README.md | 128 ++---------------- .../catalogd-api-reference.md | 0 .../crd-ref-docs-gen-config.yaml | 0 .../operator-controller-api-reference.md | 0 docs/assets/logo.svg | 98 ++++++++++++++ .../controlling-catalog-selection.md | 14 +- docs/{refs => concepts}/crd-upgrade-safety.md | 0 .../single-owner-objects.md | 2 +- docs/{drafts => concepts}/upgrade-support.md | 5 + docs/{drafts => concepts}/version-ranges.md | 2 +- docs/contribute/contributing.md | 1 + docs/{drafts => contribute}/developer.md | 2 +- docs/css/extra.css | 10 ++ .../Tasks/create-installer-service-account.md | 3 - ...eferences-permission-enforcement-plugin.md | 13 -- docs/drafts/provided-serviceaccount.md | 31 ----- docs/drafts/refs/olmv1-limitations.md | 3 - docs/drafts/support-watchNamespaces.md | 24 ---- docs/getting-started/olmv1_getting_started.md | 115 ++++++++++++++++ docs/{refs => howto}/catalog-queries.md | 2 - .../derive-service-account.md} | 11 +- .../how-to-channel-based-upgrades.md | 2 +- docs/{drafts => howto}/how-to-pin-version.md | 4 +- .../how-to-version-range-upgrades.md | 4 +- .../how-to-z-stream-upgrades.md | 4 +- docs/index.md | 44 +++--- .../olmv1_architecture.md} | 8 +- docs/project/olmv1_community.md | 15 ++ .../olmv1_design_decisions.md} | 96 +++++++------ .../olmv1_limitations.md} | 9 +- docs/{ => project}/olmv1_roadmap.md | 5 +- .../add-catalog.md} | 7 +- .../downgrade-extension.md} | 4 + .../explore-available-content.md} | 11 +- .../install-extension.md} | 24 +++- .../uninstall-extension.md} | 7 +- .../upgrade-extension.md} | 17 ++- mkdocs.yml | 65 ++++++--- 40 files changed, 487 insertions(+), 327 deletions(-) rename docs/{refs/api => api-reference}/catalogd-api-reference.md (100%) rename docs/{refs/api => api-reference}/crd-ref-docs-gen-config.yaml (100%) rename docs/{refs/api => api-reference}/operator-controller-api-reference.md (100%) create mode 100644 docs/assets/logo.svg rename docs/{drafts => concepts}/controlling-catalog-selection.md (94%) rename docs/{refs => concepts}/crd-upgrade-safety.md (100%) rename docs/{drafts => concepts}/single-owner-objects.md (100%) rename docs/{drafts => concepts}/upgrade-support.md (99%) rename docs/{drafts => concepts}/version-ranges.md (98%) create mode 120000 docs/contribute/contributing.md rename docs/{drafts => contribute}/developer.md (98%) create mode 100644 docs/css/extra.css delete mode 100644 docs/drafts/Tasks/create-installer-service-account.md delete mode 100644 docs/drafts/permissions-for-owner-references-permission-enforcement-plugin.md delete mode 100644 docs/drafts/provided-serviceaccount.md delete mode 100644 docs/drafts/refs/olmv1-limitations.md delete mode 100644 docs/drafts/support-watchNamespaces.md create mode 100644 docs/getting-started/olmv1_getting_started.md rename docs/{refs => howto}/catalog-queries.md (99%) rename docs/{drafts/derive-serviceaccount.md => howto/derive-service-account.md} (96%) rename docs/{drafts => howto}/how-to-channel-based-upgrades.md (96%) rename docs/{drafts => howto}/how-to-pin-version.md (87%) rename docs/{drafts => howto}/how-to-version-range-upgrades.md (91%) 
rename docs/{drafts => howto}/how-to-z-stream-upgrades.md (90%) rename docs/{drafts/architecture.md => project/olmv1_architecture.md} (98%) create mode 100644 docs/project/olmv1_community.md rename docs/{olmv1_overview.md => project/olmv1_design_decisions.md} (71%) rename docs/{refs/supported-extensions.md => project/olmv1_limitations.md} (85%) rename docs/{ => project}/olmv1_roadmap.md (99%) rename docs/{Tasks/adding-a-catalog.md => tutorials/add-catalog.md} (98%) rename docs/{drafts/downgrading-an-extension.md => tutorials/downgrade-extension.md} (99%) rename docs/{Tasks/exploring-available-packages.md => tutorials/explore-available-content.md} (96%) rename docs/{Tasks/installing-an-extension.md => tutorials/install-extension.md} (80%) rename docs/{Tasks/uninstalling-an-extension.md => tutorials/uninstall-extension.md} (94%) rename docs/{drafts/Tasks/upgrading-an-extension.md => tutorials/upgrade-extension.md} (94%) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 78a858d25..dbd4508d3 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -7,15 +7,16 @@ Operator Controller is an Apache 2.0 licensed project and accepts contributions By contributing to this project you agree to the Developer Certificate of Origin (DCO). This document was created by the Linux Kernel community and is a simple statement that you, as a contributor, have the legal right to make the -contribution. See the [DCO](DCO) file for details. +contribution. See the [DCO](https://github.com/operator-framework/operator-controller/blob/main/DCO) file for details. ## Overview Thank you for your interest in contributing to the Operator-Controller. -As you may or may not know, the Operator-Controller project aims to deliver the user experience described in the [Operator Lifecycle Manager (OLM) V1 Product Requirements Document (PRD)](https://docs.google.com/document/d/1-vsZ2dAODNfoHb7Nf0fbYeKDF7DUqEzS9HqgeMCvbDs/edit). The design requirements captured in the OLM V1 PRD were born from customer and community feedback based on the experience they had with the released version of [OLM V0](github.com/operator-framework/operator-lifecycle-manager). +As you may or may not know, the Operator-Controller project aims to deliver the user experience described in the [Operator Lifecycle Manager (OLM) V1 Product Requirements Document (PRD)](https://docs.google.com/document/d/1-vsZ2dAODNfoHb7Nf0fbYeKDF7DUqEzS9HqgeMCvbDs/edit). The design requirements captured in the OLM V1 PRD were born from customer and community feedback based on the experience they had with the released version of [OLM V0](https://github.com/operator-framework/operator-lifecycle-manager). The user experience captured in the OLM V1 PRD introduces many requirements that are best satisfied by a microservices architecture. The OLM V1 experience currently relies on two projects: + - [The Operator-Controller project](https://github.com/operator-framework/operator-controller/), which is the top level component allowing users to specify operators they'd like to install. - [The Catalogd project](https://github.com/operator-framework/catalogd/), which hosts operator content and helps users discover installable content. @@ -45,6 +46,7 @@ Please keep this workflow in mind as you read through the document. ## How are Milestones Designed? It's unreasonable to attempt to consider all of the design requirements laid out in the [OLM V1 PRD](https://docs.google.com/document/d/1-vsZ2dAODNfoHb7Nf0fbYeKDF7DUqEzS9HqgeMCvbDs/edit) from the onset of the project. 
Instead, the community attempts to design Milestones with the following principles: + - Milestones are tightly scoped units of work, ideally lasting one to three weeks. - Milestones are derived from the OLM V1 PRD. - Milestones are "demo driven", meaning that a set of acceptance criteria is defined upfront and the milestone is done as soon as some member of the community can run the demo. @@ -52,7 +54,7 @@ It's unreasonable to attempt to consider all of the design requirements laid out This "demo driven" development model will allow us to collect user experience and regularly course correct based on user feedback. Subsequent milestones may revert features or change the user experience based on community feedback. -The project maintainer will create a [GitHub Discussion](github.com/operator-framework/operator-controller/discussions) for the upcoming milestone once we've finalized the current milestone. Please feel encouraged to contribute suggestions for the milestone in the discussion. +The project maintainer will create a [GitHub Discussion](https://github.com/operator-framework/operator-controller/discussions) for the upcoming milestone once we've finalized the current milestone. Please feel encouraged to contribute suggestions for the milestone in the discussion. ## Where are Operator Controller Milestones? @@ -67,6 +69,7 @@ As discussed earlier, the operator-controller adheres to a microservice architec ## Submitting Issues Unsure where to submit an issue? + - [The Operator-Controller project](https://github.com/operator-framework/operator-controller/), which is the top level component allowing users to specify operators they'd like to install. - [The Catalogd project](https://github.com/operator-framework/catalogd/), which hosts operator content and helps users discover installable content. @@ -87,7 +90,7 @@ approach of changes. When contributing changes that require a new dependency, check whether it's feasible to directly vendor that code [without introducing a new dependency](https://go-proverbs.github.io/). -Currently, PRs require at least one approval from a operator-controller maintainer in order to get merged. +Currently, PRs require at least one approval from an operator-controller maintainer in order to get merged. ### Code style diff --git a/Makefile b/Makefile index 746fdfb6b..49a707b3c 100644 --- a/Makefile +++ b/Makefile @@ -312,17 +312,18 @@ quickstart: $(KUSTOMIZE) manifests #EXHELP Generate the installation release man OPERATOR_CONTROLLER_API_REFERENCE_FILENAME := operator-controller-api-reference.md CATALOGD_API_REFERENCE_FILENAME := catalogd-api-reference.md CATALOGD_TMP_DIR := $(ROOT_DIR)/.catalogd-tmp/ +API_REFERENCE_DIR := $(ROOT_DIR)/docs/api-reference crd-ref-docs: $(CRD_REF_DOCS) #EXHELP Generate the API Reference Documents.
- rm -f $(ROOT_DIR)/docs/refs/api/$(OPERATOR_CONTROLLER_API_REFERENCE_FILENAME) + rm -f $(API_REFERENCE_DIR)/$(OPERATOR_CONTROLLER_API_REFERENCE_FILENAME) $(CRD_REF_DOCS) --source-path=$(ROOT_DIR)/api \ - --config=$(ROOT_DIR)/docs/refs/api/crd-ref-docs-gen-config.yaml \ - --renderer=markdown --output-path=$(ROOT_DIR)/docs/refs/api/$(OPERATOR_CONTROLLER_API_REFERENCE_FILENAME); + --config=$(API_REFERENCE_DIR)/crd-ref-docs-gen-config.yaml \ + --renderer=markdown --output-path=$(API_REFERENCE_DIR)/$(OPERATOR_CONTROLLER_API_REFERENCE_FILENAME); rm -rf $(CATALOGD_TMP_DIR) git clone --depth 1 --branch $(CATALOGD_VERSION) https://github.com/operator-framework/catalogd $(CATALOGD_TMP_DIR) - rm -f $(ROOT_DIR)/docs/refs/api/$(CATALOGD_API_REFERENCE_FILENAME) + rm -f $(API_REFERENCE_DIR)/$(CATALOGD_API_REFERENCE_FILENAME) $(CRD_REF_DOCS) --source-path=$(CATALOGD_TMP_DIR)/api \ - --config=$(ROOT_DIR)/docs/refs/api/crd-ref-docs-gen-config.yaml \ - --renderer=markdown --output-path=$(ROOT_DIR)/docs/refs/api/$(CATALOGD_API_REFERENCE_FILENAME) + --config=$(API_REFERENCE_DIR)/crd-ref-docs-gen-config.yaml \ + --renderer=markdown --output-path=$(API_REFERENCE_DIR)/$(CATALOGD_API_REFERENCE_FILENAME) rm -rf $(CATALOGD_TMP_DIR)/ VENVDIR := $(abspath docs/.venv) diff --git a/README.md b/README.md index c4a1ce3af..298d7098d 100644 --- a/README.md +++ b/README.md @@ -2,137 +2,25 @@ The operator-controller is the central component of Operator Lifecycle Manager (OLM) v1. It extends Kubernetes with an API through which users can install extensions. -## Mission +## Overview + +OLM v1 is the follow-up to [OLM v0](https://github.com/operator-framework/operator-lifecycle-manager). Its purpose is to provide APIs, +controllers, and tooling that support the packaging, distribution, and lifecycling of Kubernetes extensions. It aims to: -OLM’s purpose is to provide APIs, controllers, and tooling that support the packaging, distribution, and lifecycling of Kubernetes extensions. It aims to: - align with Kubernetes designs and user assumptions - provide secure, high-quality, and predictable user experiences centered around declarative GitOps concepts - give cluster admins the minimal necessary controls to build their desired cluster architectures and to have ultimate control -## Overview - -OLM v1 is the follow-up to OLM v0, located [here](https://github.com/operator-framework/operator-lifecycle-manager). - OLM v1 consists of two different components: + * operator-controller (this repository) * [catalogd](https://github.com/operator-framework/catalogd) -For a more complete overview of OLM v1 and how it differs from OLM v0, see our [overview](docs/olmv1_overview.md). - -### Installation - -The following script will install OLMv1 on a Kubernetes cluster. If you don't have one, you can deploy a Kubernetes cluster with [KIND](https://sigs.k8s.io/kind). - -> [!CAUTION] -> Operator-Controller depends on [cert-manager](https://cert-manager.io/). Running the following command -> may affect an existing installation of cert-manager and cause cluster instability. 
- -The latest version of Operator Controller can be installed with the following command: - -```bash -curl -L -s https://github.com/operator-framework/operator-controller/releases/latest/download/install.sh | bash -s -``` - -## Getting Started with OLM v1 - -This quickstart procedure will guide you through the following processes: -* Deploying a catalog -* Installing, upgrading, or downgrading an extension -* Deleting catalogs and extensions - -### Create a Catalog - -OLM v1 is designed to source content from an on-cluster catalog in the file-based catalog ([FBC](https://olm.operatorframework.io/docs/reference/file-based-catalogs/#docs)) format. -These catalogs are deployed and configured through the `ClusterCatalog` resource. More information on adding catalogs -can be found [here](./docs/Tasks/adding-a-catalog). - -The following example uses the official [OperatorHub](https://operatorhub.io) catalog that contains many different -extensions to choose from. Note that this catalog contains packages designed to work with OLM v0, and that not all packages -will work with OLM v1. More information on catalog exploration and content compatibility can be found [here](./docs/refs/catalog-queries.md). - -To create the catalog, run the following command: - -```bash -# Create ClusterCatalog -kubectl apply -f - < diff --git a/docs/assets/logo.svg b/docs/assets/logo.svg new file mode 100644 [98 lines of SVG logo markup omitted] diff --git a/docs/drafts/controlling-catalog-selection.md b/docs/concepts/controlling-catalog-selection.md similarity index 94% rename from docs/drafts/controlling-catalog-selection.md rename to docs/concepts/controlling-catalog-selection.md index e91a1eb0f..544f36be5 100644 --- a/docs/drafts/controlling-catalog-selection.md +++ b/docs/concepts/controlling-catalog-selection.md @@ -27,7 +27,7 @@ spec: catalog: selector: matchLabels: - olm.operatorframework.io/metadata.name: my-catalog + olm.operatorframework.io/metadata.name: my-content-management ``` In this example, only the catalog named `my-content-management` will be considered when resolving `my-package`. @@ -93,7 +93,7 @@ spec: - key: olm.operatorframework.io/metadata.name operator: NotIn values: - - unwanted-catalog + - unwanted-content-management ``` This excludes the catalog named `unwanted-content-management` from consideration. @@ -134,7 +134,7 @@ spec: source: type: image image: - ref: quay.io/example/high-priority-catalog:latest + ref: quay.io/example/high-priority-content-management:latest ``` Catalogs have a default priority of `0`. The priority can be any 32-bit integer. Catalogs with higher priority values are preferred during bundle resolution. @@ -171,7 +171,7 @@ If the system cannot resolve to a single bundle due to ambiguity, it will genera source: type: image image: - ref: quay.io/example/catalog-a:latest + ref: quay.io/example/content-management-a:latest ``` ```yaml @@ -186,7 +186,7 @@ If the system cannot resolve to a single bundle due to ambiguity, it will genera source: type: image image: - ref: quay.io/example/catalog-b:latest + ref: quay.io/example/content-management-b:latest ``` NB: an `olm.operatorframework.io/metadata.name` label will be added automatically to ClusterCatalogs when applied @@ -209,8 +209,8 @@ If the system cannot resolve to a single bundle due to ambiguity, it will genera 3.
**Apply the Resources** ```shell - kubectl apply -f catalog-a.yaml - kubectl apply -f catalog-b.yaml + kubectl apply -f content-management-a.yaml + kubectl apply -f content-management-b.yaml kubectl apply -f install-my-operator.yaml ``` diff --git a/docs/refs/crd-upgrade-safety.md b/docs/concepts/crd-upgrade-safety.md similarity index 100% rename from docs/refs/crd-upgrade-safety.md rename to docs/concepts/crd-upgrade-safety.md diff --git a/docs/drafts/single-owner-objects.md b/docs/concepts/single-owner-objects.md similarity index 100% rename from docs/drafts/single-owner-objects.md rename to docs/concepts/single-owner-objects.md index 0ed7dfcac..0553f70a8 100644 --- a/docs/drafts/single-owner-objects.md +++ b/docs/concepts/single-owner-objects.md @@ -1,4 +1,3 @@ - # OLM Ownership Enforcement for `ClusterExtensions` In OLM, **a Kubernetes resource can only be owned by a single `ClusterExtension` at a time**. This ensures that resources within a Kubernetes cluster are managed consistently and prevents conflicts between multiple `ClusterExtensions` attempting to control the same resource. @@ -15,6 +14,7 @@ Operator bundles provide `CustomResourceDefinitions` (CRDs), which are part of a ### 2. `ClusterExtensions` Cannot Share Objects + OLM's single-owner policy means that **`ClusterExtensions` cannot share ownership of any resources**. If one `ClusterExtension` manages a specific resource (e.g., a `Deployment`, `CustomResourceDefinition`, or `Service`), another `ClusterExtension` cannot claim ownership of the same resource. Any attempt to do so will be blocked by the system. ## Error Messages diff --git a/docs/drafts/upgrade-support.md b/docs/concepts/upgrade-support.md similarity index 99% rename from docs/drafts/upgrade-support.md rename to docs/concepts/upgrade-support.md index 367a57ec1..9bc6e31ad 100644 --- a/docs/drafts/upgrade-support.md +++ b/docs/concepts/upgrade-support.md @@ -1,3 +1,8 @@ +--- +hide: + - toc +--- + # Upgrade support This document explains how OLM v1 handles upgrades. diff --git a/docs/drafts/version-ranges.md b/docs/concepts/version-ranges.md similarity index 98% rename from docs/drafts/version-ranges.md rename to docs/concepts/version-ranges.md index d247cc19f..75e88f04e 100644 --- a/docs/drafts/version-ranges.md +++ b/docs/concepts/version-ranges.md @@ -4,7 +4,7 @@ This document explains how to specify a version range to install or update an ex You define a version range in a ClusterExtension's custom resource (CR) file. -## Specifying a version range in the CR +### Specifying a version range in the CR If you specify a version range in the ClusterExtension's CR, OLM 1.0 installs or updates the latest version of the extension that can be resolved within the version range. The resolved version is the latest version of the extension that satisfies the dependencies and constraints of the extension and the environment. diff --git a/docs/contribute/contributing.md b/docs/contribute/contributing.md new file mode 120000 index 000000000..f939e75f2 --- /dev/null +++ b/docs/contribute/contributing.md @@ -0,0 +1 @@ +../../CONTRIBUTING.md \ No newline at end of file diff --git a/docs/drafts/developer.md b/docs/contribute/developer.md similarity index 98% rename from docs/drafts/developer.md rename to docs/contribute/developer.md index 31959de6c..b97c9d693 100644 --- a/docs/drafts/developer.md +++ b/docs/contribute/developer.md @@ -177,4 +177,4 @@ done ## Contributing -Refer to [CONTRIBUTING.md](./CONTRIBUTING.md) for more information. 
+Refer to [CONTRIBUTING.md](contributing.md) for more information. diff --git a/docs/css/extra.css b/docs/css/extra.css new file mode 100644 index 000000000..0553b3b97 --- /dev/null +++ b/docs/css/extra.css @@ -0,0 +1,10 @@ +/* Hide banner title */ +.md-header__title { + visibility: hidden; +} + +/* Make top-level navigation items bold */ +.md-nav__item--active > .md-nav__link, /* Active top-level items */ +.md-nav__item--nested > .md-nav__link { /* Nested top-level items */ + font-weight: bold; +} \ No newline at end of file diff --git a/docs/drafts/Tasks/create-installer-service-account.md b/docs/drafts/Tasks/create-installer-service-account.md deleted file mode 100644 index e66c06076..000000000 --- a/docs/drafts/Tasks/create-installer-service-account.md +++ /dev/null @@ -1,3 +0,0 @@ -# Create Installer Service Account - -Placeholder. We need to document this. \ No newline at end of file diff --git a/docs/drafts/permissions-for-owner-references-permission-enforcement-plugin.md b/docs/drafts/permissions-for-owner-references-permission-enforcement-plugin.md deleted file mode 100644 index f80d332e0..000000000 --- a/docs/drafts/permissions-for-owner-references-permission-enforcement-plugin.md +++ /dev/null @@ -1,13 +0,0 @@ -# Configuring a service account when the cluster uses the `OwnerReferencesPermissionEnforcement` admission plugin - -The [`OwnerReferencesPermissionEnforcement`](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#ownerreferencespermissionenforcement) admission plugin requires a user to have permission to set finalizers on owner objects when creating or updating an object to contain an `ownerReference` with `blockOwnerDeletion: true`. - -When operator-controller installs or upgrades a `ClusterExtension`, it sets an `ownerReference` on each object with `blockOwnerDeletion: true`. Therefore serviceaccounts configured in `.spec.serviceAccount.name` must have the following permission in a bound `ClusterRole`: - - ```yaml - - apiGroups: ["olm.operatorframework.io"] - resources: ["clusterextensions/finalizers"] - verbs: ["update"] - resourceNames: [""] - ``` - diff --git a/docs/drafts/provided-serviceaccount.md b/docs/drafts/provided-serviceaccount.md deleted file mode 100644 index 33f4501e9..000000000 --- a/docs/drafts/provided-serviceaccount.md +++ /dev/null @@ -1,31 +0,0 @@ -# Provided ServiceAccount for ClusterExtension Installation and Management - -Adhering to OLM v1's "Secure by Default" tenet, OLM v1 does not have the permissions -necessary to install content. This follows the least privilege principle and reduces -the chance of a [confused deputy attack](https://en.wikipedia.org/wiki/Confused_deputy_problem). -Instead, users must explicitly specify a ServiceAccount that will be used to perform the -installation and management of a specific ClusterExtension. The ServiceAccount is specified -in the ClusterExtension manifest as follows: - -```yaml -apiVersion: olm.operatorframework.io/v1alpha1 -kind: ClusterExtension -metadata: - name: argocd -spec: - source: - sourceType: Catalog - catalog: - packageName: argocd-operator - version: 0.6.0 - install: - namespace: argocd - serviceAccount: - name: argocd-installer -``` - -The ServiceAccount must be configured with the RBAC permissions required by the ClusterExtension. -If the permissions do not meet the minimum requirements, installation will fail. If no ServiceAccount -is provided in the ClusterExtension manifest, then the manifest will be rejected. 
- -//TODO: Add link to documentation on determining least privileges required for the ServiceAccount \ No newline at end of file diff --git a/docs/drafts/refs/olmv1-limitations.md b/docs/drafts/refs/olmv1-limitations.md deleted file mode 100644 index 1c351f9e9..000000000 --- a/docs/drafts/refs/olmv1-limitations.md +++ /dev/null @@ -1,3 +0,0 @@ -# Current OLM v1 Limitations - -Placeholder. We need to document this. \ No newline at end of file diff --git a/docs/drafts/support-watchNamespaces.md b/docs/drafts/support-watchNamespaces.md deleted file mode 100644 index b10c279cc..000000000 --- a/docs/drafts/support-watchNamespaces.md +++ /dev/null @@ -1,24 +0,0 @@ -# Install Modes and WatchNamespaces in OMLv1 - -Operator Lifecycle Manager (OLM) operates with cluster-admin privileges, enabling it to grant necessary permissions to the Extensions it deploys. For extensions packaged as [`RegistryV1`][registryv1] bundles, it's the responsibility of the authors to specify supported `InstallModes` in the ClusterServiceVersion ([CSV][csv]). InstallModes define the operational scope of the extension within the Kubernetes cluster, particularly in terms of namespace availability. The four recognized InstallModes are as follows: - -1. OwnNamespace: This mode allows the extension to monitor and respond to events within its own deployment namespace. -1. SingleNamespace: In this mode, the extension is set up to observe events in a single, specific namespace other than the one it is deployed in. -1. MultiNamespace: This enables the extension to function across multiple specified namespaces. -1. AllNamespaces: Under this mode, the extension is equipped to monitor events across all namespaces within the cluster. - -When creating a cluster extension, users have the option to define a list of `watchNamespaces`. This list determines the specific namespaces within which they intend the operator to operate. The configuration of `watchNamespaces` must align with the InstallModes supported by the extension as specified by the bundle author. The supported configurations in the order of preference are as follows: - - -| Length of `watchNamespaces` specified through ClusterExtension | Allowed values | Supported InstallMode in CSV | Description | -|------------------------------|-------------------------------------------------------|----------------------|-----------------------------------------------------------------| -| **0 (Empty/Unset)** | - | AllNamespaces | Extension monitors all namespaces. | -| | - | OwnNamespace | Supported when `AllNamespaces` is false. Extension only active in its deployment namespace. | -| **1 (Single Entry)** | `""` (Empty String) | AllNamespaces | Extension monitors all namespaces. | -| | Entry equals Install Namespace | OwnNamespace | Extension watches only its install namespace. | -| | Entry is a specific namespace (not the Install Namespace) | SingleNamespace | Extension monitors a single, specified namespace in the spec. | -| **>1 (Multiple Entries)** | Entries are specific, multiple namespaces | MultiNamespace | Extension monitors each of the specified multiple namespaces in the spec. 
- - -[registryv1]: https://olm.operatorframework.io/docs/tasks/creating-operator-manifests/#writing-your-operator-manifests -[csv]: https://olm.operatorframework.io/docs/concepts/crds/clusterserviceversion/ \ No newline at end of file diff --git a/docs/getting-started/olmv1_getting_started.md b/docs/getting-started/olmv1_getting_started.md new file mode 100644 index 000000000..77760f4fc --- /dev/null +++ b/docs/getting-started/olmv1_getting_started.md @@ -0,0 +1,115 @@ +### Installation + +The following script will install OLMv1 on a Kubernetes cluster. If you don't have one, you can deploy a Kubernetes cluster with [KIND](https://sigs.k8s.io/kind). + +> [!CAUTION] +> Operator-Controller depends on [cert-manager](https://cert-manager.io/). Running the following command +> may affect an existing installation of cert-manager and cause cluster instability. + +The latest version of Operator Controller can be installed with the following command: + +```bash +curl -L -s https://github.com/operator-framework/operator-controller/releases/latest/download/install.sh | bash -s +``` + +### Getting Started with OLM v1 + +This quickstart procedure will guide you through the following processes: + +* Deploying a catalog +* Installing, upgrading, or downgrading an extension +* Deleting catalogs and extensions + +### Create a Catalog + +OLM v1 is designed to source content from an on-cluster catalog in the file-based catalog ([FBC](https://olm.operatorframework.io/docs/reference/file-based-catalogs/#docs)) format. +These catalogs are deployed and configured through the `ClusterCatalog` resource. More information on adding catalogs +can be found [here](../tutorials/add-catalog.md). + +The following example uses the official [OperatorHub](https://operatorhub.io) catalog that contains many different +extensions to choose from. Note that this catalog contains packages designed to work with OLM v0, and that not all packages +will work with OLM v1. More information on catalog exploration and content compatibility can be found [here](../howto/catalog-queries.md). + +To create the catalog, run the following command: + +```bash +# Create ClusterCatalog +kubectl apply -f - < ``` - - ## Package queries Available packages in a catalog diff --git a/docs/drafts/derive-serviceaccount.md b/docs/howto/derive-service-account.md similarity index 96% rename from docs/drafts/derive-serviceaccount.md rename to docs/howto/derive-service-account.md index fec1649df..599fc103a 100644 --- a/docs/drafts/derive-serviceaccount.md +++ b/docs/howto/derive-service-account.md @@ -1,7 +1,7 @@ # Derive minimal ServiceAccount required for ClusterExtension Installation and Management -OLM v1 does not have permission to install extensions on a cluster by default. In order to install a [supported bundle](../refs/supported-extensions.md), -OLM must be provided a ServiceAccount configured with the appropriate permissions. For more information, see the [provided ServiceAccount](./provided-serviceaccount.md) documentation. +OLM v1 does not have permission to install extensions on a cluster by default. In order to install a [supported bundle](../project/olmv1_limitations.md), +OLM must be provided a ServiceAccount configured with the appropriate permissions. This document serves as a guide for how to derive the RBAC necessary to install a bundle. @@ -12,6 +12,7 @@ This bundle image contains all the manifests that make up the extension (e.g. 
`CustomResourceDefinition`s, RBAC, `Deployment`s, etc.) as well as a [`ClusterServiceVersion`](https://olm.operatorframework.io/docs/concepts/crds/clusterserviceversion/) (CSV) that describes the extension and its service account's permission requirements. The service account must have permissions to: + - create and manage the extension's `CustomResourceDefinition`s - create and manage the resources packaged in the bundle - grant the extension controller's service account the permissions it requires for its operation @@ -30,7 +31,7 @@ Depending on the scope, each permission will need to be added to either a `Clust ### Example The following example illustrates the process of deriving the minimal RBAC required to install the [ArgoCD Operator](https://operatorhub.io/operator/argocd-operator) [v0.6.0](https://operatorhub.io/operator/argocd-operator/alpha/argocd-operator.v0.6.0) provided by [OperatorHub.io](https://operatorhub.io/). -The final permission set can be found in the [ClusterExtension sample manifest](../../config/samples/olm_v1alpha1_clusterextension.yaml) in the [samples](../../config/samples/olm_v1alpha1_clusterextension.yaml) directory. +The final permission set can be found in the [ClusterExtension sample manifest](https://github.com/operator-framework/operator-controller/blob/main/config/samples/olm_v1alpha1_clusterextension.yaml) in the [samples](https://github.com/operator-framework/operator-controller/blob/main/config/samples/olm_v1alpha1_clusterextension.yaml) directory. The bundle includes the following manifests, which can be found [here](https://github.com/argoproj-labs/argocd-operator/tree/da6b8a7e68f71920de9545152714b9066990fc4b/deploy/olm-catalog/argocd-operator/0.6.0): @@ -301,7 +302,7 @@ Once the installer service account required cluster-scoped and namespace-scoped 6. Create the `RoleBinding` between the installer service account and its role 7. Create the `ClusterExtension` -A manifest with the full set of resources can be found [here](../../config/samples/olm_v1alpha1_clusterextension.yaml). +A manifest with the full set of resources can be found [here](https://github.com/operator-framework/operator-controller/blob/main/config/samples/olm_v1alpha1_clusterextension.yaml). ### Alternatives @@ -346,6 +347,6 @@ kubectl create clusterrolebinding my-cluster-extension-installer-role-binding \ #### hack/tools/catalog In the spirit of making this process more tenable until the proper tools are in place, the scripts -in [hack/tools/catalogs](../../hack/tools/catalogs) were created to help the user navigate and search catalogs as well +in [hack/tools/catalogs](https://github.com/operator-framework/operator-controller/blob/main/hack/tools/catalogs) were created to help the user navigate and search catalogs as well as to generate the minimal RBAC requirements. These tools are offered as is, with no guarantees on their correctness, support, or maintenance. For more information, see [Hack Catalog Tools](https://github.com/operator-framework/operator-controller/blob/main/hack/tools/catalogs/README.md).
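To make the preceding derivation concrete, here is a minimal sketch of the shape of `ClusterRole` an installer service account ends up with. It is not the full ArgoCD permission set (that lives in the sample manifest linked above); names such as `argocd-installer-clusterrole` are illustrative placeholders, and the `clusterextensions/finalizers` rule reflects the requirement, described elsewhere in these docs, that operator-controller sets `blockOwnerDeletion` owner references:

```yaml
# Illustrative sketch only: a real installer role must enumerate every
# resource the bundle creates (see the sample manifest linked above).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argocd-installer-clusterrole  # hypothetical name
rules:
  # Let operator-controller, acting as this service account, update the
  # finalizers on the ClusterExtension that owns the installed objects.
  - apiGroups: [olm.operatorframework.io]
    resources: [clusterextensions/finalizers]
    verbs: [update]
    resourceNames: [argocd]
  # Create and manage the CRDs packaged in the bundle.
  - apiGroups: [apiextensions.k8s.io]
    resources: [customresourcedefinitions]
    verbs: [create, list, watch, get, update, patch, delete]
  # Create and manage the cluster-scoped RBAC packaged in the bundle,
  # including granting the extension controller's own permissions.
  - apiGroups: [rbac.authorization.k8s.io]
    resources: [clusterroles, clusterrolebindings]
    verbs: [create, list, watch, get, update, patch, delete]
```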
\ No newline at end of file diff --git a/docs/drafts/how-to-channel-based-upgrades.md b/docs/howto/how-to-channel-based-upgrades.md similarity index 96% rename from docs/drafts/how-to-channel-based-upgrades.md rename to docs/howto/how-to-channel-based-upgrades.md index f1692422f..501a7f951 100644 --- a/docs/drafts/how-to-channel-based-upgrades.md +++ b/docs/howto/how-to-channel-based-upgrades.md @@ -1,4 +1,4 @@ -## How-to: Channel-Based Automatic Upgrades +# Channel-Based Automatic Upgrades A "channel" is a package author defined stream of updates for an extension. A set of channels can be set in the Catalog source to restrict automatic updates to the set of versions defined in those channels. diff --git a/docs/drafts/how-to-pin-version.md b/docs/howto/how-to-pin-version.md similarity index 87% rename from docs/drafts/how-to-pin-version.md rename to docs/howto/how-to-pin-version.md index 17bd7e1c6..606b994aa 100644 --- a/docs/drafts/how-to-pin-version.md +++ b/docs/howto/how-to-pin-version.md @@ -1,4 +1,4 @@ -## How-to: Version Pin and Disable Automatic Updates +# Pin Version and Disable Automatic Updates To disable automatic updates, and pin the version of an extension, set `version` in the Catalog source to a specific version (e.g. 1.2.3). @@ -21,4 +21,4 @@ spec: name: argocd-installer ``` -For more information on SemVer version ranges see [version ranges](version-ranges.md) +For more information on SemVer version ranges see [version ranges](../concepts/version-ranges.md) diff --git a/docs/drafts/how-to-version-range-upgrades.md b/docs/howto/how-to-version-range-upgrades.md similarity index 91% rename from docs/drafts/how-to-version-range-upgrades.md rename to docs/howto/how-to-version-range-upgrades.md index 9a5c305ee..ddb753fba 100644 --- a/docs/drafts/how-to-version-range-upgrades.md +++ b/docs/howto/how-to-version-range-upgrades.md @@ -1,4 +1,4 @@ -## How-to: Version Range Automatic Updates +# Version Range Automatic Updates Set the version for the desired package in the Catalog source to a comparison string, like `">=3.0, <3.6"`, to restrict the automatic updates to the version range. Any new version of the extension released in the catalog within this range will be automatically applied. @@ -21,4 +21,4 @@ spec: name: argocd-installer ``` -For more information on SemVer version ranges see [version-rages](version-ranges.md) \ No newline at end of file +For more information on SemVer version ranges see [version ranges](../concepts/version-ranges.md) \ No newline at end of file diff --git a/docs/drafts/how-to-z-stream-upgrades.md b/docs/howto/how-to-z-stream-upgrades.md similarity index 90% rename from docs/drafts/how-to-z-stream-upgrades.md rename to docs/howto/how-to-z-stream-upgrades.md index 835abc2b5..8666e09b7 100644 --- a/docs/drafts/how-to-z-stream-upgrades.md +++ b/docs/howto/how-to-z-stream-upgrades.md @@ -1,4 +1,4 @@ -## How-to: Z-Stream Automatic Updates +# Z-Stream Automatic Updates To restrict automatic updates to only z-stream patches and avoid breaking changes, use the `"~"` version range operator when setting the version for the desired package in Catalog source.
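Since the example manifests in these how-to hunks are truncated in this view, here is a sketch of the kind of `ClusterExtension` they are editing, reusing the `v1alpha1` schema from the `provided-serviceaccount.md` file deleted earlier in this patch; only the `version` field differs between the pinning, range, and z-stream variants:

```yaml
apiVersion: olm.operatorframework.io/v1alpha1
kind: ClusterExtension
metadata:
  name: argocd
spec:
  source:
    sourceType: Catalog
    catalog:
      packageName: argocd-operator
      # Pinned: "0.6.0" / range: ">=0.6.0, <1.0" / z-stream only: "~0.6.0"
      version: "~0.6.0"
  install:
    namespace: argocd
    serviceAccount:
      name: argocd-installer
```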
@@ -21,4 +21,4 @@ spec: name: argocd-installer ``` -For more information on SemVer version ranges see [version ranges](version-ranges.md) +For more information on SemVer version ranges see [version ranges](../concepts/version-ranges.md) diff --git a/docs/index.md b/docs/index.md index 6fda98519..942cdd938 100644 --- a/docs/index.md +++ b/docs/index.md @@ -1,17 +1,24 @@ -# What is Operator Lifecycle Manager (OLM)? +--- +hide: + - toc +--- -Operator Lifecycle Manager (OLM) is an open-source [CNCF](https://www.cncf.io/) project with the mission to manage the -lifecycle of cluster extensions centrally and declaratively on Kubernetes clusters. Its purpose is to make installing, +# Overview + +Operator Lifecycle Manager (OLM) is an open-source [CNCF](https://www.cncf.io/) project with the mission to manage the +lifecycle of cluster extensions centrally and declaratively on Kubernetes clusters. Its purpose is to make installing, running, and updating functional extensions to the cluster easy, safe, and reproducible for cluster administrators and PaaS administrators. -Previously, OLM was focused on a particular type of cluster extension: [Operators](https://operatorhub.io/what-is-an-operator#:~:text=is%20an%20Operator-,What%20is%20an%20Operator%20after%20all%3F,or%20automation%20software%20like%20Ansible.). +Previously, OLM was focused on a particular type of cluster extension: [Operators](https://operatorhub.io/what-is-an-operator#:~:text=is%20an%20Operator-,What%20is%20an%20Operator%20after%20all%3F,or%20automation%20software%20like%20Ansible.). Operators are a method of packaging, deploying, and managing a Kubernetes application. An Operator is composed of one or more controllers paired with one or both of the following objects: -* One or more API extensions + +* One or more API extensions * One or more [CustomResourceDefinitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) (CRDs). OLM helped define lifecycles for these extensions: from packaging and distribution to installation, configuration, upgrade, and removal. The first iteration of OLM, termed OLM v0, included several concepts and features targeting the stability, security, and supportability of the life-cycled applications, for instance: + * A dependency model that enabled cluster extensions to focus on their primary purpose by delegating out of scope behavior to dependencies * A constraint model that allowed cluster extension developers to define support limitations such as conflicting extensions, and minimum kubernetes versions * A namespace-based multi-tenancy model in lieu of namespace-scoped CRDs @@ -20,11 +27,13 @@ The first iteration of OLM, termed OLM v0, included several concepts and feature Since its initial release, OLM has helped catalyse the growth of Operators throughout the Kubernetes ecosystem. [OperatorHub.io](https://operatorhub.io/) is a popular destination for discovering Operators, and boasts over 300 packages from many different vendors. -# Why are we building OLM v1? +## Why are we building OLM v1? + +The Operator Lifecycle Manager (OLM) has been in production for over five years, serving as a critical component in managing Kubernetes Operators. +Over this time, the community has gathered valuable insights from real-world usage, identifying both the strengths and limitations of the initial design, +and validating the design's initial assumptions. 
This process led to a complete redesign and rewrite of OLM that, compared to its predecessor, aims to +provide: -OLM v0 has been in production for over 5 years, and the community to leverage this experience and question the initial -goals and assumptions of the project. OLM v1 is a complete redesign and rewrite of OLM taking into account this accumulated experience. -Compared to its predecessor, amongst other things, OLM v1 aims to provide: * A simpler API surface and mental model * Less opinionated automation and greater flexibility * Support for Kubernetes applications beyond only Operators @@ -32,18 +41,5 @@ Compared to its predecessor, amongst other things, OLM v1 aims to provide: * Helm Chart support * GitOps support -For an in-depth look at OLM v1, please see the [OLM v1 Overview](olmv1_overview.md) and the [Roadmap](olmv1_roadmap.md). - -# The OLM community - -In this next iteration of OLM, the community has also taken care to make it as contributor-friendly as possible, and welcomes new contributors. -The project is tracked in a [GitHub project](https://github.com/orgs/operator-framework/projects/8/), -which provides a great entry point to quickly find something interesting to work on and contribute. - -You can reach out to the OLM community for feedbacks/discussions/contributions in the following channels: - - * Kubernetes Slack channel: [#olm-dev](https://kubernetes.slack.com/messages/olm-dev) - * [Operator Framework on Google Groups](https://groups.google.com/forum/#!forum/operator-framework) - * Weekly in-person Working Group meeting: [olm-wg](https://github.com/operator-framework/community#operator-lifecycle-manager-working-group) - -For further information on contributing, please consult the [Contribution Guide](../CONTRIBUTING.md) +To learn more about where v1 came from, and where it's going, please see [Multi-Tenancy Challenges, Lessons Learned, and Design Shifts](project/olmv1_design_decisions.md) +and our feature [Roadmap](project/olmv1_roadmap.md). diff --git a/docs/drafts/architecture.md b/docs/project/olmv1_architecture.md similarity index 98% rename from docs/drafts/architecture.md rename to docs/project/olmv1_architecture.md index 5be36f9af..1672fae64 100644 --- a/docs/drafts/architecture.md +++ b/docs/project/olmv1_architecture.md @@ -1,5 +1,9 @@ +--- +hide: + - toc +--- -## OLM V1 Architecture +# OLM V1 Architecture This document describes the OLM v1 architecture. OLM v1 consists of two main components: @@ -54,6 +58,7 @@ flowchart TB ### Operator-controller: operator-controller is the central component of OLM v1. It is responsible for: + * managing a cache of catalog metadata provided by catalogd through its HTTP server * keeping the catalog metadata cache up-to-date with the current state of the catalogs * locating the right `registry+v1` bundle, if any, that meets the constraints expressed in the `ClusterExtension` resource, such as package name, version range, channel, etc. given the current state of the cluster * applying the bundle manifests: installing or updating the content. It has three main sub-components: + * Cluster Extension Controller: * Queries the catalogd (catalogd HTTP Server) to get catalog information. * Once received, the catalog information is saved to catalog-cache. The cache will be updated automatically if a Catalog is noticed to have a different resolved image reference.
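As a concrete illustration of the catalogd HTTP server described above, the catalog metadata it serves can be inspected directly. This sketch assumes the default `catalogd-catalogserver` Service in the `olmv1-system` namespace and a `ClusterCatalog` named `operatorhubio`; both names may differ on a given cluster:

```bash
# Expose the catalogd HTTP server locally (Service name/namespace assumed).
kubectl -n olmv1-system port-forward svc/catalogd-catalogserver 8443:443 &

# Fetch the catalog's FBC stream (one JSON blob per line) and list packages.
curl -k https://localhost:8443/catalogs/operatorhubio/api/v1/all \
  | jq -s 'map(select(.schema == "olm.package")) | map(.name) | sort'
```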
diff --git a/docs/project/olmv1_community.md b/docs/project/olmv1_community.md new file mode 100644 index 000000000..55e8cf1b8 --- /dev/null +++ b/docs/project/olmv1_community.md @@ -0,0 +1,15 @@ + +OLM is an open-source [CNCF](https://www.cncf.io/) project with a friendly and supportive community of developers, testers, + and documentation experts with a passion for Kubernetes. + +Through the effort of redesigning OLM, the community also took the opportunity to make the project more accessible, +and contributor-friendly through its weekly meetings, continuous planning, and a [GitHub project](https://github.com/orgs/operator-framework/projects/8/) + tracker that provides a convenient entry point to quickly find something interesting to work on and contribute. + +You can reach out to the OLM community for feedback, discussions, and contributions in the following channels: + +* Kubernetes Slack channel: [#olm-dev](https://kubernetes.slack.com/messages/olm-dev) +* [Operator Framework on Google Groups](https://groups.google.com/forum/#!forum/operator-framework) +* Weekly in-person Working Group meeting: [olm-wg](https://github.com/operator-framework/community#operator-lifecycle-manager-working-group) + +For further information on contributing, please consult the [Contribution Guide](../contribute/contributing.md) \ No newline at end of file diff --git a/docs/olmv1_overview.md b/docs/project/olmv1_design_decisions.md similarity index 71% rename from docs/olmv1_overview.md rename to docs/project/olmv1_design_decisions.md index 417f6d5ba..f8017455d 100644 --- a/docs/olmv1_overview.md +++ b/docs/project/olmv1_design_decisions.md @@ -1,48 +1,56 @@ -# OLM v1 Overview +# Multi-Tenancy Challenges, Lessons Learned, and Design Shifts -## What won't OLMv1 do that OLMv0 did? +This document provides historical context on the design explorations and challenges that led to substantial design shifts between +OLM v1 and its predecessor. It explains the technical reasons why OLM v1 cannot support major v0 features such as +multi-tenancy and namespace-specific controller configurations. Finally, it highlights OLM v1’s shift toward +more secure, predictable, and simple operations while moving away from some of the complex, error-prone features of OLM v0. -TL;DR: OLMv1 cannot feasibly support multi-tenancy or any feature that assumes multi-tenancy. All multi-tenancy features end up falling over because of the global API system of Kubernetes. While this short conclusion may be unsatisfying, the reasons are complex and intertwined. +## What won't OLM v1 do that OLM v0 did? + +TL;DR: OLM v1 cannot feasibly support multi-tenancy or any feature that assumes multi-tenancy. All multi-tenancy features end up falling over because of the global API system of Kubernetes. While this short conclusion may be unsatisfying, the reasons are complex and intertwined. ### Historical Context Nearly every active contributor in the Operator Framework project contributed to design explorations and prototypes over an entire year. For each of these design explorations, there are complex webs of features and assumptions that are necessary to understand the context that ultimately led to a conclusion of infeasibility.
Here is a sampling of some of the ideas we explored: -- [[WIP] OLM v1's approach to multi-tenancy](https://docs.google.com/document/d/1xTu7XadmqD61imJisjnP9A6k38_fiZQ8ThvZSDYszog/edit#heading=h.m19itc78n5rw) -- [OLMv1 Multi-tenancy Brainstorming](https://docs.google.com/document/d/1ihFuJR9YS_GWW4_p3qjXu3WjvK0NIPIkt0qGixirQO8/edit#heading=h.vy9860qq1j01) + +- [OLM v1's approach to multi-tenancy](https://docs.google.com/document/d/1xTu7XadmqD61imJisjnP9A6k38_fiZQ8ThvZSDYszog/edit#heading=h.m19itc78n5rw) +- [OLM v1 Multi-tenancy Brainstorming](https://docs.google.com/document/d/1ihFuJR9YS_GWW4_p3qjXu3WjvK0NIPIkt0qGixirQO8/edit#heading=h.vy9860qq1j01) ### Watched namespaces cannot be configured in a first-class API -OLMv1 will not have a first-class API for configuring the namespaces that a controller will watch. +OLM v1 will not have a first-class API for configuring the namespaces that a controller will watch. Kubernetes APIs are global. Kubernetes is designed with the assumption that a controller WILL reconcile an object no matter where it is in the cluster. However, Kubernetes does not assume that a controller will be successful when it reconciles an object. The Kubernetes design assumptions are: + - CRDs and their controllers are trusted cluster extensions. - If an object for an API exists a controller WILL reconcile it, no matter where it is in the cluster. -OLMv1 will make the same assumption that Kubernetes does and that users of Kubernetes APIs do. That is: If a user has RBAC to create an object in the cluster, they can expect that a controller exists that will reconcile that object. If this assumption does not hold, it will be considered a configuration issue, not an OLMv1 bug. +OLM v1 will make the same assumption that Kubernetes does and that users of Kubernetes APIs do. That is: If a user has RBAC to create an object in the cluster, they can expect that a controller exists that will reconcile that object. If this assumption does not hold, it will be considered a configuration issue, not an OLM v1 bug. This means that it is a best practice to implement and configure controllers to have cluster-wide permission to read and update the status of their primary APIs. It does not mean that a controller needs cluster-wide access to read/write secondary APIs. If a controller can update the status of its primary APIs, it can tell users when it lacks permission to act on secondary APIs. ### Dependencies based on watched namespaces -Since there will be no first-class support for configuration of watched namespaces, OLMv1 cannot resolve dependencies among bundles based on where controllers are watching. +Since there will be no first-class support for configuration of watched namespaces, OLM v1 cannot resolve dependencies among bundles based on where controllers are watching. -However, not all bundle constraints are based on dependencies among bundles from different packages. OLMv1 will be able to support constraints against cluster state. For example, OLMv1 could support a “kubernetesVersionRange” constraint that blocks installation of a bundle if the current kubernetes cluster version does not fall into the specified range. +However, not all bundle constraints are based on dependencies among bundles from different packages. OLM v1 will be able to support constraints against cluster state. For example, OLM v1 could support a “kubernetesVersionRange” constraint that blocks installation of a bundle if the current kubernetes cluster version does not fall into the specified range. 
#### Background -For packages that specify API-based dependencies, OLMv0’s dependency checker knows which controllers are watching which namespaces. While OLMv1 will have awareness of which APIs are present on a cluster (via the discovery API), it will not know which namespaces are being watched for reconciliation of those APIs. Therefore dependency resolution based solely on API availability would only work in cases where controllers are configured to watch all namespaces. +For packages that specify API-based dependencies, OLMv0’s dependency checker knows which controllers are watching which namespaces. While OLM v1 will have awareness of which APIs are present on a cluster (via the discovery API), it will not know which namespaces are being watched for reconciliation of those APIs. Therefore dependency resolution based solely on API availability would only work in cases where controllers are configured to watch all namespaces. For packages that specify package-based dependencies, OLMv0’s dependency checker again knows which controllers are watching which namespaces. This case is challenging for a variety of reasons: + 1. How would a dependency resolver know which extensions were installed (let alone which extensions were watching which namespaces)? If a user is running the resolver, they would be blind to an installed extension that is watching their namespace if they don’t have permission to list extensions in the installation namespace. If a controller is running the resolver, then it might leak information to a user about installed extensions that the user is not otherwise entitled to know. 2. Even if (1) could be overcome, the lack of awareness of watched namespaces means that the resolver would have to make assumptions. If only one controller is installed, is it watching the right set of namespaces to meet the constraint? If multiple controllers are installed, are any of them watching the right set of namespaces? Without knowing the watched namespaces of the parent and child controllers, a correct dependency resolver implementation is not possible to implement. -Note that regardless of the ability of OLMv1 to perform dependency resolution (now or in the future), OLMv1 will not automatically install a missing dependency when a user requests an operator. The primary reasoning is that OLMv1 will err on the side of predictability and cluster-administrator awareness. +Note that regardless of the ability of OLM v1 to perform dependency resolution (now or in the future), OLM v1 will not automatically install a missing dependency when a user requests an operator. The primary reasoning is that OLM v1 will err on the side of predictability and cluster-administrator awareness. ### "Watch namespace"-aware operator discoverability @@ -50,7 +58,7 @@ When operators add APIs to a cluster, these APIs are globally visible. As stated Therefore, the API discoverability story boils down to answering this question for the user: “What APIs do I have access to in a given namespace?” Fortunately, built-in APIs exist to answer this question: Kubernetes Discovery, SelfSubjectRulesReview (SSRR), and SelfSubjectAccessReview (SSAR). -However, helping users discover which actual controllers will reconcile those APIs is not possible unless OLMv1 knows which namespaces those controllers are watching. +However, helping users discover which actual controllers will reconcile those APIs is not possible unless OLM v1 knows which namespaces those controllers are watching. 
Any solution here would be unaware of where a controller is actually watching and could only know “is there a controller installed that provides an implementation of this API?”. However even knowledge of a controller installation is not certain. Any user can use the discovery, SSRR, and SSAR. Not all users can list all Extensions (see [User discovery of “available” APIs](#user-discovery-of-available-apis)). @@ -59,19 +67,21 @@ Any solution here would be unaware of where a controller is actually watching an The multi-tenancy promises that OLMv0 made were false promises. Kubernetes is not multi-tenant with respect to management of APIs (because APIs are global). Any promise that OLMv0 has around multi-tenancy evaporates when true tenant isolation attempts are made, and any attempt to fix a broken promise is actually just a bandaid on an already broken assumption. So where do we go from here? There are multiple solutions that do not involve OLM implementing full multi-tenancy support, some or all of which can be explored. + 1. Customers transition to a control plane per tenant 2. Extension authors update their operators to support customers’ multi-tenancy use cases 3. Extension authors with “simple” lifecycling concerns transition to other packaging and deployment strategies (e.g. helm charts) ### Single-tenant control planes -One choice for customers would be to adopt low-overhead single-tenant control planes in which every tenant can have full control over their APIs and controllers and be truly isolated (at the control plane layer at least) from other tenants. With this option, the things OLMv1 cannot do (listed above) are irrelevant, because the purpose of all of those features is to support multi-tenant control planes in OLM. +One choice for customers would be to adopt low-overhead single-tenant control planes in which every tenant can have full control over their APIs and controllers and be truly isolated (at the control plane layer at least) from other tenants. With this option, the things OLM v1 cannot do (listed above) are irrelevant, because the purpose of all of those features is to support multi-tenant control planes in OLM. The [Kubernetes multi-tenancy docs](https://kubernetes.io/docs/concepts/security/multi-tenancy/#virtual-control-plane-per-tenant) contain a good overview of the options in this space. Kubernetes vendors may also have their own virtual control plane implementations. ### Shift multi-tenant responsibility to operators There is a set of operators that both (a) provide fully namespace-scoped workload-style operands and that (b) provide a large amount of value to their users for advanced features like backup and migration. For these operators, the Operator Framework program would suggest that they shift toward supporting multi-tenancy directly. That would involve: + 1. Taking extreme care to avoid API breaking changes. 2. Supporting multiple versions of their operands in a single version of the operator (if required by users in multi-tenant clusters). 3. Maintaining support for versioned operands for the same period of time that the operator is supported for a given cluster version. @@ -79,7 +89,7 @@ There is a set of operators that both (a) provide fully namespace-scoped workloa ### Operator authors ship controllers outside of OLM -Some projects have been successful delivering and supporting their operator on Kubernetes, but outside of OLM, for example with helm-packaged operators. 
On this path, individual layered project teams have more flexibility in solving lifecycling problems for their users because they are unencumbered by OLM’s opinions. However the tradeoff is that those project teams and their users take on responsibility and accountability for safe upgrades, automation, and multi-tenant architectures. With OLMv1 no longer attempting to support multi-tenancy in a first-class way, these tradeoffs change and project teams may decide that a different approach is necessary. +Some projects have been successful delivering and supporting their operator on Kubernetes, but outside of OLM, for example with helm-packaged operators. On this path, individual layered project teams have more flexibility in solving lifecycling problems for their users because they are unencumbered by OLM’s opinions. However the tradeoff is that those project teams and their users take on responsibility and accountability for safe upgrades, automation, and multi-tenant architectures. With OLM v1 no longer attempting to support multi-tenancy in a first-class way, these tradeoffs change and project teams may decide that a different approach is necessary. This path does not necessarily mean a scattering of content in various places. It would still be possible to provide customers with a marketplace of content (e.g. see https://artifacthub.io/). @@ -110,10 +120,11 @@ OLM constantly monitors the state of all on-cluster resources for all the operat ### CRD Upgrade Safety Checks Before OLM upgrades a CRD, OLM performs a set of safety checks to identify any changes that potentially would have negative impacts, such as: + - data loss - incompatible schema changes -These checks may not be a guarantee that an upgrade is safe; instead, they are intended to provide an early warning sign for identifiable incompatibilities. False positives (OLMv1 claims a breaking change when there is none) and false negatives (a breaking change makes it through the check without being caught) are possible, at least while the OLMv1 team iterates on this feature. +These checks may not be a guarantee that an upgrade is safe; instead, they are intended to provide an early warning sign for identifiable incompatibilities. False positives (OLM v1 claims a breaking change when there is none) and false negatives (a breaking change makes it through the check without being caught) are possible, at least while the OLM v1 team iterates on this feature. ### User permissions management @@ -124,61 +135,65 @@ Also note that user permission management does not unlock operator discoverabili ### User discovery of “available” APIs In the future, the Operator Framework team could explore building an API similar to SelfSubjectAccessReview and SelfSubjectRulesReview that answers the question: -“What is the public metadata of all of the extensions that are installed on the cluster that provide APIs that I have permission for in namespace X?” +“What is the public metadata of all extensions that are installed on the cluster that provide APIs that I have permission for in namespace X?” One solution would be to join “installed extensions with user permissions”. If an installed extension provides an API that a user has RBAC permission for, that extension would be considered available to that user in that scope. This solution would not be foolproof: it makes the (reasonable) assumption that an administrator only configures RBAC for a user in a namespace where a controller is reconciling that object. 
This solution would tell users about API-only and API+controller bundles that are installed. It would not tell users about controller-only bundles, because they do not include APIs.

-Other similar API-centric solutions could be explored as well. For example, pursuing enhancements to OLMv1 or core Kubernetes related to API metadata and/or grouping.
+Other similar API-centric solutions could be explored as well. For example, pursuing enhancements to OLM v1 or core Kubernetes related to API metadata and/or grouping.

-A key note here is that controller-specific metadata like the version of the controller that will reconcile the object in a certain namespace is not necessary for discovery. Discovery is primarily about driving user flows around presenting information and example usage of a group of APIs such that CLIs and UIs can provide rich experiences around interactions with available APIs.
+A key insight here is that controller-specific metadata, like the version of the controller that will reconcile the object in a certain namespace, is not necessary for discovery. Discovery is primarily about driving user flows around presenting information and example usage of a group of APIs such that CLIs and UIs can provide rich experiences around interactions with available APIs.

## Approach

-We will adhere to the following tenets in our approach for the design and implementation of OLMv1
+We will adhere to the following tenets in our approach for the design and implementation of OLM v1.

### Do not fight Kubernetes

One of the key features of cloud-native applications/extensions/operators is that they typically come with a Kubernetes-based API (e.g. CRD) and a controller that reconciles instances of that API.

In Kubernetes, API registration is cluster-scoped. It is not possible to register different APIs in different namespaces. Instances of an API can be cluster- or namespace-scoped. All APIs are global (they can be invoked/accessed regardless of namespace).

For cluster-scoped APIs, the names of their instances must be unique. For example, it’s possible to have Nodes named “one” and “two”, but it’s not possible to have multiple Nodes named “two”.

For namespace-scoped APIs, the names of their instances must be unique per namespace. The following illustrates this for ConfigMaps, a namespace-scoped API:

Allowed
+
- Namespace: test, name: my-configmap
- Namespace: other, name: my-configmap

Disallowed
+
- Namespace: test, name: my-configmap
- Namespace: test, name: my-configmap

In cases where OLMv0 decides that joint ownership of CRDs will not impact different tenants, OLMv0 allows multiple installations of bundles that include the same named CRD, and OLMv0 itself manages the CRD lifecycle. This has security implications because it requires OLMv0 to act as a deputy, but it also pits OLM against the limitations of the Kubernetes API.

OLMv0 promises that different versions of an operator can be installed in the cluster for use by different tenants without tenants being affected by each other. This is not a promise OLM can make because it is not possible to have multiple versions of the same CRD present on a cluster for different tenants.

-In OLMv1, we will not design the core APIs and controllers around this promise. Instead, we will build an API where ownership of installed objects is not shared. Managed objects are owned by exactly one extension.
+In OLM v1, we will not design the core APIs and controllers around this promise. Instead, we will build an API where ownership of installed objects is not shared. Managed objects are owned by exactly one extension.

This pattern is generic, aligns with the Kubernetes API, and makes multi-tenancy a possibility, but not a guarantee or core concept.

We will explore the implications of this design on existing OLMv0 registry+v1 bundles as part of the larger v0 to v1 migration design. For net new content, operator authors that intend multiple installations of an operator on the same cluster would need to package their components to account for this ownership rule. Generally, this would entail separation along these lines:
- CRDs, conversion webhook workloads, admission webhook configurations and workloads, and APIServices and their workloads.
- Controller workloads, service accounts, RBAC, etc.

-OLMv1 will include primitives (e.g. templating) to make it possible to have multiple non-conflicting installations of bundles.
+OLM v1 will include primitives (e.g. templating) to make it possible to have multiple non-conflicting installations of bundles.

-However it should be noted that the purpose of these primitives is not to enable multi-tenancy. It is to enable administrators to provide configuration for the installation of an extension. The fact that operators can be packaged as separate bundles and parameterized in a way that permits multiple controller installations is incidental, and not something that OLMv1 will encourage or promote.
+However, it should be noted that the purpose of these primitives is not to enable multi-tenancy. It is to enable administrators to provide configuration for the installation of an extension. The fact that operators can be packaged as separate bundles and parameterized in a way that permits multiple controller installations is incidental, and not something that OLM v1 will encourage or promote.

### Make OLM secure by default

OLMv0 runs as cluster-admin, which is a security concern. OLMv0 has optional security controls for operator installations via the OperatorGroup, which allows a user with permission to create or update them to also set a ServiceAccount that will be used for authorization purposes on operator installations and upgrades in that namespace. If a ServiceAccount is not explicitly specified, OLM’s cluster-admin credentials are used. Another avenue that cluster administrators have is to lock down permissions and usage of the CatalogSource API, disable default catalogs, and provide tenants with custom vetted catalogs. However, if a cluster admin is not aware of these options, the default configuration of a cluster means that users with permission to create a Subscription in namespaces that contain an OperatorGroup effectively have cluster-admin, because OLMv0 has unlimited permissions to install any bundle available in the default catalogs and the default community catalog is not vetted for limited RBAC.

Because OLMv0 is used to install more RBAC and run arbitrary workloads, there are numerous potential vectors that attackers could exploit. While there are no known exploits and there has not been any specific concern reported from customers, we believe CNCF’s reputation rests on secure cloud-native software and that this is a non-negotiable area to improve.
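To ground the OperatorGroup mechanism described above, the opt-in scoping looks roughly like this in OLMv0 (an illustrative manifest; the namespace and ServiceAccount names are hypothetical):

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: team-a-group
  namespace: team-a
spec:
  # Installed operators in this namespace watch only team-a.
  targetNamespaces:
    - team-a
  # Installs and upgrades in this namespace are authorized against this
  # ServiceAccount's RBAC instead of OLMv0's cluster-admin credentials.
  serviceAccountName: team-a-installer
```

If `serviceAccountName` is omitted, OLMv0 falls back to its own cluster-admin credentials, which is exactly the insecure default the next tenet is designed to eliminate.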
To make OLM secure by default:

-- OLMv1 will not be granted cluster admin permissions. Instead it will require service accounts provided by users to actually install, upgrade, and delete content. In addition to the security this provides, it also fulfills one of OLM’s long-standing requirements: halt when bundle upgrades require additional permissions and wait until those permissions are granted.
-- OLMv1 will use secure communication protocols between all internal components and between itself and its clients.
+
+- OLM v1 will not be granted cluster-admin permissions. Instead, it will require service accounts provided by users to actually install, upgrade, and delete content. In addition to the security this provides, it also fulfills one of OLM’s long-standing requirements: halt when bundle upgrades require additional permissions and wait until those permissions are granted.
+- OLM v1 will use secure communication protocols between all internal components and between itself and its clients.

### Simple and predictable semantics for install, upgrade, and delete

OLMv0 has grown into a complex web of functionality that is difficult to understand, even for seasoned Kubernetes veterans.

-In OLMv1 we will move to GitOps-friendly APIs that allow administrators to rely on their experience with conventional Kubernetes API behavior (declarative, eventually consistent) to manage operator lifecycles.
+In OLM v1, we will move to GitOps-friendly APIs that allow administrators to rely on their experience with conventional Kubernetes API behavior (declarative, eventually consistent) to manage operator lifecycles.

-OLMv1 will reduce its API surface down to two primary APIs that represent catalogs of content, and intent for that content to be installed on the cluster.
+OLM v1 will reduce its API surface to two primary APIs: one representing catalogs of content, and one representing the intent for that content to be installed on the cluster.
+
+OLM v1 will:
-OLMv1 will:
- Permit administrators to pin to specific versions, channels, version ranges, or combinations of both.
- Permit administrators to pause management of an installation for maintenance or troubleshooting purposes.
- Put opinionated guardrails up by default (e.g. follow operator developer-defined upgrade edges).
@@ -188,53 +203,58 @@ OLMv1 will:
### APIs and behaviors to handle common controller patterns

OLMv0 takes an extremely opinionated stance on the contents of the bundles it installs and on the way that operators can be lifecycled. The original designers believed these opinions would keep OLM’s scope limited and that they encompassed best practices for operator lifecycling. Some of these opinions are:
+
- All bundles must include a ClusterServiceVersion, which ostensibly gives operator authors an API that they can use to fully describe how to run the operator, what permissions it requires, what APIs it provides, and what metadata to show to users.
- Bundles cannot contain arbitrary objects. OLMv0 needs to have specific handling for each resource that it supports.
- Cluster administrators cannot override OLM safety checks around CRD changes or upgrades.

-OLMv1 will take a slightly different approach:
-- It will not require bundles to have very specific controller-centric shapes. OLMv1 will happily install a bundle that contains a deployment, service, and ingress or a bundle that contains a single configmap.
+OLM v1 will take a slightly different approach:
+
+- It will not require bundles to have very specific controller-centric shapes. OLM v1 will happily install a bundle that contains a deployment, service, and ingress, or a bundle that contains a single configmap.
- However, for bundles that do include CRDs, controllers, RBAC, webhooks, and other objects that relate to the behavior of the apiserver, OLM will continue to have opinions and special handling:
    - CRD upgrade checks (best effort)
    - Specific knowledge and handling of webhooks.
-- To the extent necessary OLMv1 will include optional controller-centric concepts in its APIs and or CLIs in order to facilitate the most common controller patterns. Examples could include:
+- To the extent necessary, OLM v1 will include optional controller-centric concepts in its APIs and/or CLIs in order to facilitate the most common controller patterns. Examples could include:
    - Permission management
    - CRD upgrade check policies
-- OLMv1 will continue to have opinions about upgrade traversals and CRD changes that help users prevent accidental breakage, but it will also allow administrators to disable safeguards and proceed anyway.
+- OLM v1 will continue to have opinions about upgrade traversals and CRD changes that help users prevent accidental breakage, but it will also allow administrators to disable safeguards and proceed anyway.

-OLMv0 has some support for automatic upgrades. However administrators cannot control the maximum version for automatic upgrades, and the upgrade policy (manual vs automatic) applies to all operators in a namespace. If any operator’s upgrade policy is manual, all upgrades of all operators in the namespace must be approved manually.
+OLMv0 has some support for automatic upgrades. However, administrators cannot control the maximum version for automatic upgrades, and the upgrade policy (manual vs automatic) applies to all operators in a namespace. If any operator’s upgrade policy is manual, all upgrades of all operators in the namespace must be approved manually.

-OLMv1 will have fine-grained control for version ranges (and pins) and for controlling automatic upgrades for individual operators regardless of the policy of other operators installed in the same namespace.
+OLM v1 will have fine-grained control for version ranges (and pins) and for controlling automatic upgrades for individual operators regardless of the policy of other operators installed in the same namespace.

### Constraint checking (but not automated on-cluster management)

OLMv0 includes support for dependency and constraint checking for many common use cases (e.g. required and provided APIs, required cluster version, required package versions). It also has other constraint APIs that have not gained traction (e.g. CEL expressions and compound constraints).

In addition to its somewhat haphazard constraint expression support, OLMv0 also automatically installs dependency trees, which has proven problematic in several respects:
+
1. OLMv0 can resolve existing dependencies from outside the current namespace, but it can only install new dependencies in the current namespace. One scenario where this is problematic is if A depends on B, where A supports only OwnNamespace mode and B supports only AllNamespace mode. In that case, OLMv0’s auto dependency management fails because B cannot be installed in the same namespace as A (assuming the OperatorGroup in that namespace is configured for OwnNamespace operators to work).
2. OLMv0’s logic for choosing a dependency among multiple contenders is confusing and error-prone, and an administrator’s ability to have fine-grained control of upgrades is essentially limited to building and deploying tailor-made catalogs.
3. OLMv0 automatically installs dependencies. The only way for an administrator to avoid this OLMv0 functionality is to fully understand the dependency tree in advance and to then install dependencies from the leaves to the root so that OLMv0 always detects that dependencies are already met. If OLMv0 installs a dependency for you, it does not uninstall it when it is no longer depended upon.

-OLMv1 will not provide dependency resolution among packages in the catalog (see [Dependencies based on watched namespaces](#dependencies-based-on-watched-namespaces))
+OLM v1 will not provide dependency resolution among packages in the catalog (see [Dependencies based on watched namespaces](#dependencies-based-on-watched-namespaces)).

-OLMv1 will provide constraint checking based on available cluster state. Constraint checking will be limited to checking whether the existing constraints are met. If so, install proceeds. If not, unmet constraints will be reported and the install/upgrade waits until constraints are met.
+OLM v1 will provide constraint checking based on available cluster state. Constraint checking will be limited to checking whether the existing constraints are met. If so, the install proceeds. If not, unmet constraints will be reported and the install/upgrade waits until constraints are met.

-The Operator Framework team will perform a survey of registry+v1 packages that currently rely on OLMv0’s dependency features and will suggest a solution as part of the overall OLMv0 to OLMv1 migration effort.
+The Operator Framework team will perform a survey of registry+v1 packages that currently rely on OLMv0’s dependency features and will suggest a solution as part of the overall OLMv0 to OLM v1 migration effort.

### Client libraries and CLIs contribute to the overall UX

OLMv0 has no official client libraries or CLIs that can be used to augment its functionality or provide a more streamlined user experience. The kubectl "operator" plugin was developed several years ago, but has never been a focus of the core Operator Framework development team, and has never factored into the overall architecture.

-OLMv1 will deliver an official CLI (likely by overhauling the kubectl operator plugin) and will use it to meet requirements that are difficult or impossible to implement in a controller, or where an architectural assessment dictates that a client is the better choice. This CLI would automate standard workflows over cluster APIs to facilitate simple administrative actions (e.g. automatically create RBAC and ServiceAccounts necessary for an extension installation as an optional step in the CLI’s extension install experience).
+OLM v1 will deliver an official CLI (likely by overhauling the kubectl operator plugin) and will use it to meet requirements that are difficult or impossible to implement in a controller, or where an architectural assessment dictates that a client is the better choice. This CLI would automate standard workflows over cluster APIs to facilitate simple administrative actions (e.g. automatically create RBAC and ServiceAccounts necessary for an extension installation as an optional step in the CLI’s extension install experience).

The official CLI will provide administrators and users with a UX that covers the most common scenarios users will encounter.
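To make the intended division of labor tangible, a standard flow might look like the following (a sketch only: command names and flags are loosely modeled on today's kubectl "operator" plugin and are illustrative, not a committed interface):

```sh
# Discover content, then install with an explicitly scoped installer identity.
kubectl operator catalog list          # list catalogs added to the cluster
kubectl operator list-available        # browse packages available to install
kubectl operator install argocd-operator \
  --namespace argocd \
  --service-account argocd-installer   # hypothetical flag: use pre-created installer RBAC
```

Anything beyond flows like these (complex multi-resource orchestration, bespoke upgrade logic) would be left to the on-cluster APIs or third-party tooling.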
-The official CLI will explicitly NOT attempt to cover complex scenarios. Maintainers will reject requests to over-complicate the CLI. Users with advanced use cases will be able to directly interact with OLMv1’s on-cluster APIs.
+The official CLI will explicitly NOT attempt to cover complex scenarios. Maintainers will reject requests to over-complicate the CLI. Users with advanced use cases will be able to directly interact with OLM v1’s on-cluster APIs.

The idea is:
+
- On-cluster APIs can be used to manage operators in 100% of cases (assuming bundle content is structured in a compatible way)
- The official CLI will cover standard user flows, covering ~80% of use cases.
- Third-party or unofficial CLIs will cover the remaining ~20% of use cases.

Areas where the official CLI could provide value include:
+
- Catalog interactions (search, list, inspect, etc.)
- Standard install/upgrade/delete commands
- Upgrade previews
diff --git a/docs/refs/supported-extensions.md b/docs/project/olmv1_limitations.md
similarity index 85%
rename from docs/refs/supported-extensions.md
rename to docs/project/olmv1_limitations.md
index 8a1e97c02..172d8cbb5 100644
--- a/docs/refs/supported-extensions.md
+++ b/docs/project/olmv1_limitations.md
@@ -1,8 +1,15 @@
+---
+hide:
+  - toc
+---
+
+## OLM v0 Extension Support
+
Currently, OLM v1 supports installing cluster extensions that meet the following criteria:

* The extension must support installation via the `AllNamespaces` install mode.
* The extension must not use webhooks.
-* The extension must not declare dependencies using the any of following file-based catalog properties:
+* The extension must not declare dependencies using any of the following file-based catalog properties:

    * `olm.gvk.required`
    * `olm.package.required`
diff --git a/docs/olmv1_roadmap.md b/docs/project/olmv1_roadmap.md
similarity index 99%
rename from docs/olmv1_roadmap.md
rename to docs/project/olmv1_roadmap.md
index 23bcc5d96..5a0542a3e 100644
--- a/docs/olmv1_roadmap.md
+++ b/docs/project/olmv1_roadmap.md
@@ -1,7 +1,6 @@
---
-title: Product Requriement Doc
-layout: default
-nav_order: 2
+hide:
+  - toc
---

# OLM v1 roadmap
diff --git a/docs/Tasks/adding-a-catalog.md b/docs/tutorials/add-catalog.md
similarity index 98%
rename from docs/Tasks/adding-a-catalog.md
rename to docs/tutorials/add-catalog.md
index 8158f0d4a..c0961d561 100644
--- a/docs/Tasks/adding-a-catalog.md
+++ b/docs/tutorials/add-catalog.md
@@ -1,4 +1,9 @@
-# Adding a catalog of extensions to a cluster
+---
+hide:
+  - toc
+---
+
+# Add a Catalog of Extensions to a Cluster

Extension authors can publish their products in catalogs. ClusterCatalogs are curated collections of Kubernetes extensions, such as Operators.
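For concreteness, the catalog object this tutorial adds looks something like the following (a minimal sketch assuming the catalogd `v1alpha1` API at the time of writing; the catalog image reference is illustrative):

```yaml
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: ClusterCatalog
metadata:
  name: operatorhubio
spec:
  source:
    type: image
    image:
      # Any file-based catalog (FBC) image can be referenced here.
      ref: quay.io/operatorhubio/catalog:latest
```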
diff --git a/docs/drafts/downgrading-an-extension.md b/docs/tutorials/downgrade-extension.md
similarity index 99%
rename from docs/drafts/downgrading-an-extension.md
rename to docs/tutorials/downgrade-extension.md
index c372ce8e2..0e57d4687 100644
--- a/docs/drafts/downgrading-an-extension.md
+++ b/docs/tutorials/downgrade-extension.md
@@ -1,3 +1,7 @@
+---
+hide:
+  - toc
+---

# Downgrade a ClusterExtension
diff --git a/docs/Tasks/exploring-available-packages.md b/docs/tutorials/explore-available-content.md
similarity index 96%
rename from docs/Tasks/exploring-available-packages.md
rename to docs/tutorials/explore-available-content.md
index eb3e1499a..2364501c1 100644
--- a/docs/Tasks/exploring-available-packages.md
+++ b/docs/tutorials/explore-available-content.md
@@ -1,6 +1,11 @@
-# Exploring Available Packages
+---
+hide:
+  - toc
+---

-After you add a catalog of extensions to your cluster, you must port forward your catalog as a service.
+# Explore Available Content
+
+After you [add a catalog of extensions](add-catalog.md) to your cluster, you must port-forward your catalog as a service.

Then you can query the catalog by using `curl` commands and the `jq` CLI tool to find extensions to install.

## Prerequisites
@@ -143,4 +148,4 @@ The following examples will show this default behavior, but for simplicity's sak
### Additional resources

-* [Catalog queries](../refs/catalog-queries.md)
+* [Catalog queries](../howto/catalog-queries.md)
diff --git a/docs/Tasks/installing-an-extension.md b/docs/tutorials/install-extension.md
similarity index 80%
rename from docs/Tasks/installing-an-extension.md
rename to docs/tutorials/install-extension.md
index 1458a1653..ffd28519f 100644
--- a/docs/Tasks/installing-an-extension.md
+++ b/docs/tutorials/install-extension.md
@@ -1,4 +1,9 @@
-# Installing an extension from a catalog
+---
+hide:
+  - toc
+---
+
+# Install an Extension from a Catalog

In Operator Lifecycle Manager (OLM) 1.0, Kubernetes extensions are scoped to the cluster.

After you add a catalog to your cluster, you can install an extension by creating a custom resource (CR) and applying it.
@@ -6,9 +11,22 @@ After you add a catalog to your cluster, you can install an extension by creatin
## Prerequisites

* A deployed and unpacked catalog
-* The name, and optionally version, or channel, of the [supported extension](../concepts/supported-extensions.md) to be installed
+* The name, and optionally the version or channel, of the [supported extension](../project/olmv1_limitations.md) to be installed
* An existing namespace in which to install the extension
-* A suitable service account for installation (more information can be found [here](../drafts/Tasks/create-installer-service-account.md))
+
+### ServiceAccount for ClusterExtension Installation and Management
+
+Adhering to OLM v1's "Secure by Default" tenet, OLM v1 does not have the permissions
+necessary to install content. This follows the least-privilege principle and reduces
+the chance of a [confused deputy attack](https://en.wikipedia.org/wiki/Confused_deputy_problem).
+Instead, users must explicitly specify a ServiceAccount that will be used to perform the
+installation and management of a specific ClusterExtension.
+
+The ServiceAccount must be configured with the RBAC permissions required by the ClusterExtension.
+If the permissions do not meet the minimum requirements, installation will fail. If no ServiceAccount
+is provided in the ClusterExtension manifest, then the manifest will be rejected.
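To make the ServiceAccount requirement concrete, an install manifest looks roughly like this (a sketch assuming the `v1alpha1` ClusterExtension schema in use at the time of writing; the package, version, and names are illustrative):

```yaml
apiVersion: olm.operatorframework.io/v1alpha1
kind: ClusterExtension
metadata:
  name: argocd
spec:
  source:
    sourceType: Catalog
    catalog:
      packageName: argocd-operator
      version: 0.6.0
  install:
    namespace: argocd
    serviceAccount:
      # Must already exist in the install namespace and carry the RBAC the
      # extension needs; installation fails if its permissions fall short.
      name: argocd-installer
```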
+
+For information on determining the ServiceAccount's permissions, please see [Derive minimal ServiceAccount required for ClusterExtension Installation and Management](../howto/derive-service-account.md).

## Procedure
diff --git a/docs/Tasks/uninstalling-an-extension.md b/docs/tutorials/uninstall-extension.md
similarity index 94%
rename from docs/Tasks/uninstalling-an-extension.md
rename to docs/tutorials/uninstall-extension.md
index 575a7602a..3d20442a8 100644
--- a/docs/Tasks/uninstalling-an-extension.md
+++ b/docs/tutorials/uninstall-extension.md
@@ -1,4 +1,9 @@
-# Deleting an extension
+---
+hide:
+  - toc
+---
+
+# Uninstall an Extension

You can uninstall a Kubernetes extension and its associated custom resource definitions (CRDs) by deleting the extension's custom resource (CR).
diff --git a/docs/drafts/Tasks/upgrading-an-extension.md b/docs/tutorials/upgrade-extension.md
similarity index 94%
rename from docs/drafts/Tasks/upgrading-an-extension.md
rename to docs/tutorials/upgrade-extension.md
index ec13c7317..e55a53f96 100644
--- a/docs/drafts/Tasks/upgrading-an-extension.md
+++ b/docs/tutorials/upgrade-extension.md
@@ -1,17 +1,22 @@
-# Upgrading an Extension
+---
+hide:
+  - toc
+---
+
+# Upgrade an Extension

Existing extensions can be upgraded by updating the version field in the ClusterExtension resource.

-For information on downgrading an extension, see [Downgrade an Extension](../downgrading-an-extension.md).
+For information on downgrading an extension, see [Downgrade an Extension](downgrade-extension.md).

## Prerequisites

* You have an extension installed
-* The target version is compatible with OLM v1 (see [OLM v1 limitations](../refs/olmv1-limitations.md))
-* CRD compatibility between the versions being upgraded or downgraded (see [CRD upgrade safety](../../refs/crd-upgrade-safety.md))
-* The installer service account's RBAC permissions are adequate for the target version (see [Minimal RBAC for Installer Service Account](create-installer-service-account.md))
+* The target version is compatible with OLM v1 (see [OLM v1 limitations](../project/olmv1_limitations.md))
+* CRD compatibility between the versions being upgraded or downgraded (see [CRD upgrade safety](../concepts/crd-upgrade-safety.md))
+* The installer service account's RBAC permissions are adequate for the target version (see [Minimal RBAC for Installer Service Account](../howto/derive-service-account.md))

-For more detailed information see [Upgrade Support](../upgrade-support.md).
+For more detailed information, see [Upgrade Support](../concepts/upgrade-support.md).
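Under the same illustrative schema used earlier, an upgrade is just an edit to the pinned version, applied with `kubectl apply` (a sketch, not the project's prescribed manifest):

```yaml
apiVersion: olm.operatorframework.io/v1alpha1
kind: ClusterExtension
metadata:
  name: argocd
spec:
  source:
    sourceType: Catalog
    catalog:
      packageName: argocd-operator
      # Bumped from 0.6.0; the new version must follow the package's upgrade
      # edges unless the safeguards are explicitly overridden.
      version: 0.11.0
  install:
    namespace: argocd
    serviceAccount:
      name: argocd-installer
```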
## Procedure
diff --git a/mkdocs.yml b/mkdocs.yml
index cc95662a3..7680fd461 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -1,28 +1,57 @@
# yaml-language-server: $schema=https://squidfunk.github.io/mkdocs-material/schema.json

-site_name: Operator Controller documentation
+site_name: Operator Lifecycle Manager
theme:
-  name: "material"
-  features:
-    - content.code.copy
+  logo: assets/logo.svg
+  name: "material"
+  palette:
+    primary: black
+  features:
+    - content.code.copy
+    - navigation.top
+#    - navigation.tabs
+    - navigation.indexes

repo_url: https://github.com/operator-framework/operator-controller

+extra_css:
+  - css/extra.css
+
nav:
-  - Home: 'index.md'
-  - Components: 'components.md'
-  - Tasks:
-    - Adding a catalog of extensions: 'Tasks/adding-a-catalog.md'
-    - Finding extensions to install: 'Tasks/exploring-available-packages.md'
-    - Installing an extension: 'Tasks/installing-an-extension.md'
-    - Deleting an extension: 'Tasks/uninstalling-an-extension.md'
-  - References:
-    - Supported extensions: 'refs//supported-extensions.md'
-    - API references:
-      - Operator Controller API reference: 'refs/api/operator-controller-api-reference.md'
-      - CatalogD API reference: 'refs/api/catalogd-api-reference.md'
-    - Catalog queries: 'refs/catalog-queries.md'
-    - CRD Upgrade Safety: 'refs/crd-upgrade-safety.md'
+  - Overview:
+    - index.md
+    - Community: project/olmv1_community.md
+    - Architecture: project/olmv1_architecture.md
+    - Design Decisions: project/olmv1_design_decisions.md
+    - Limitations: project/olmv1_limitations.md
+    - Roadmap: project/olmv1_roadmap.md
+  - Getting Started: getting-started/olmv1_getting_started.md
+  - Tutorials:
+    - Add a Catalog: tutorials/add-catalog.md
+    - Explore Content: tutorials/explore-available-content.md
+    - Install an Extension: tutorials/install-extension.md
+    - Upgrade an Extension: tutorials/upgrade-extension.md
+    - Downgrade an Extension: tutorials/downgrade-extension.md
+    - Uninstall an Extension: tutorials/uninstall-extension.md
+  - How-To Guides:
+    - Catalog queries: howto/catalog-queries.md
+    - Channel-Based Upgrades: howto/how-to-channel-based-upgrades.md
+    - Version Pinning: howto/how-to-pin-version.md
+    - Version Range Upgrades: howto/how-to-version-range-upgrades.md
+    - Z-Stream Upgrades: howto/how-to-z-stream-upgrades.md
+    - Derive Service Account Permissions: howto/derive-service-account.md
+  - Conceptual Guides:
+    - Single Owner Objects: concepts/single-owner-objects.md
+    - Upgrade Support: concepts/upgrade-support.md
+    - CRD Upgrade Safety: concepts/crd-upgrade-safety.md
+    - Content Resolution: concepts/controlling-catalog-selection.md
+    - Version Ranges: concepts/version-ranges.md
+  - API Reference:
+    - Operator Controller API reference: api-reference/operator-controller-api-reference.md
+    - CatalogD API reference: api-reference/catalogd-api-reference.md
+  - Contribute:
+    - Contributing: contribute/contributing.md
+    - Developing OLM v1: contribute/developer.md

markdown_extensions:
  - pymdownx.highlight: