2 changes: 2 additions & 0 deletions modules/aws-cluster-installation-options-aws-lzs.adoc
@@ -14,9 +14,11 @@ endif::[]
[id="aws-cluster-installation-options-aws-lzs_{context}"]
ifdef::local-zone[]
= Cluster installation options for an AWS Local Zones environment

endif::local-zone[]
ifdef::wavelength-zone[]
= Cluster installation options for an AWS Wavelength Zones environment

endif::wavelength-zone[]

Choose one of the following installation options to install an {product-title} cluster on AWS with edge compute nodes defined in {zone-type}:
1 change: 1 addition & 0 deletions modules/byoh-removal.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: PROCEDURE
[id="removing-byoh-windows-instance"]
= Removing BYOH Windows instances

You can remove BYOH instances that are attached to the cluster by deleting the instance's entry in the config map. Deleting an instance reverts that instance back to its state before it was added to the cluster. Any logs and container runtime artifacts are not added to these instances.

For an instance to be cleanly removed, it must be accessible with the current private key provided to WMCO. For example, to remove the `10.1.42.1` instance from the previous example, the config map would be changed to the following:
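That updated config map is collapsed in this diff view. A rough, hypothetical sketch of a `windows-instances` config map with the `10.1.42.1` entry removed (the remaining host and username are placeholders):

[source,yaml]
----
kind: ConfigMap
apiVersion: v1
metadata:
  name: windows-instances
  namespace: openshift-windows-machine-config-operator
data:
  # The 10.1.42.1 entry has been deleted; only the remaining instance stays attached.
  instance.example.com: |-
    username=Administrator
----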
1 change: 1 addition & 0 deletions modules/cli-installing-cli-web-console-macos.adoc
@@ -1,6 +1,7 @@
:_mod-docs-content-type: PROCEDURE
[id="cli-installing-cli-web-console-macos_{context}"]
= Installing the OpenShift CLI on macOS using the web console

ifeval::["{context}" == "updating-restricted-network-cluster"]
:restricted:
endif::[]
1 change: 1 addition & 0 deletions modules/compliance-crd-advanced-compliance-scan.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: CONCEPT
[id="advance-compliance-scan-object_{context}"]
= Advanced ComplianceScan Object

The Compliance Operator includes options for advanced users for debugging or integrating with existing tooling. It is recommended that you not create a `ComplianceScan` object directly; instead, manage it by using a `ComplianceSuite` object.

.Example Advanced `ComplianceScan` object
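The module's example is collapsed in this diff view. For orientation only, a minimal `ComplianceScan` sketch (profile, content, and image values are placeholders):

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceScan
metadata:
  name: workers-scan
  namespace: openshift-compliance
spec:
  scanType: Node                 # Node or Platform
  profile: xccdf_org.ssgproject.content_profile_moderate
  content: ssg-rhcos4-ds.xml
  contentImage: <content_image>  # placeholder
  nodeSelector:
    node-role.kubernetes.io/worker: ""
----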
1 change: 1 addition & 0 deletions modules/compliance-crd-compliance-check-result.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: CONCEPT
[id="compliance-check-result_{context}"]
= ComplianceCheckResult object

When you run a scan with a specific profile, several rules in the profiles are verified. For each of these rules, a `ComplianceCheckResult` object is created, which provides the state of the cluster for a specific rule.

.Example `ComplianceCheckResult` object
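The module's example is collapsed here. A hedged sketch of the general shape of a `ComplianceCheckResult` (names and values are illustrative):

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceCheckResult
metadata:
  name: workers-scan-no-direct-root-logins
  namespace: openshift-compliance
  labels:
    compliance.openshift.io/check-severity: medium
    compliance.openshift.io/check-status: FAIL
    compliance.openshift.io/scan-name: workers-scan
id: xccdf_org.ssgproject.content_rule_no_direct_root_logins
severity: medium
status: FAIL   # one result object is created per rule that the scan verifies
----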
1 change: 1 addition & 0 deletions modules/compliance-crd-compliance-remediation.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: CONCEPT
[id="compliance-remediation-object_{context}"]
= ComplianceRemediation object

For a specific check, you can have a fix that is specified in the datastream. However, if a Kubernetes fix is available, the Compliance Operator creates a `ComplianceRemediation` object.

.Example `ComplianceRemediation` object
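The module's example is collapsed in this view. An illustrative sketch, with the embedded `MachineConfig` payload abbreviated:

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceRemediation
metadata:
  name: workers-scan-no-empty-passwords
  namespace: openshift-compliance
  labels:
    compliance.openshift.io/scan-name: workers-scan
spec:
  apply: false        # set to true to apply; setting it back to false unapplies the remediation
  current:
    object:
      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      spec: {}        # the generated Kubernetes fix, abbreviated here
----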
1 change: 1 addition & 0 deletions modules/compliance-crd-compliance-suite.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: CONCEPT
[id="compliance-suite-object_{context}"]
= ComplianceSuite object

The `ComplianceSuite` object helps you keep track of the state of the scans. It contains the raw settings to create scans and the overall result.

For `Node` type scans, you should map the scan to the `MachineConfigPool`, since it contains the remediations for any issues. If you specify a label, ensure it directly applies to a pool.
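A minimal `ComplianceSuite` sketch with a `Node` scan mapped to the worker pool through its node selector (values are placeholders):

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceSuite
metadata:
  name: workers-compliancesuite
  namespace: openshift-compliance
spec:
  autoApplyRemediations: false
  schedule: "0 1 * * *"
  scans:
  - name: workers-scan
    scanType: Node
    profile: xccdf_org.ssgproject.content_profile_moderate
    content: ssg-rhcos4-ds.xml
    contentImage: <content_image>
    nodeSelector:
      node-role.kubernetes.io/worker: ""   # maps to the worker MachineConfigPool
----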
1 change: 1 addition & 0 deletions modules/compliance-crd-profile-bundle.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: CONCEPT
[id="profile-bundle-object_{context}"]
= ProfileBundle object

When you install the Compliance Operator, it includes ready-to-run `ProfileBundle` objects. The Compliance Operator parses the `ProfileBundle` object and creates a `Profile` object for each profile in the bundle. It also parses `Rule` and `Variable` objects, which are used by the `Profile` object.
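A `ProfileBundle` sketch for reference (the image and datastream file are typical placeholders):

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ProfileBundle
metadata:
  name: rhcos4
  namespace: openshift-compliance
spec:
  contentImage: <content_image>    # image that ships the SCAP datastream
  contentFile: ssg-rhcos4-ds.xml   # datastream file inside the image
----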


1 change: 1 addition & 0 deletions modules/compliance-crd-rule.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: CONCEPT
[id="rule-object_{context}"]
= Rule object

The rules that form the profiles are also exposed as `Rule` objects. Use the `Rule` object to define your compliance check requirements and to specify how they can be fixed.

.Example `Rule` object
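The module's example is collapsed in this diff view; a rough sketch of the object's shape (identifiers are illustrative):

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: Rule
metadata:
  name: rhcos4-audit-rules-login-events
  namespace: openshift-compliance
id: xccdf_org.ssgproject.content_rule_audit_rules_login_events
title: Record Attempts to Alter Logon and Logout Events
severity: medium
description: <rule_description_text>
rationale: <rule_rationale_text>
----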
1 change: 1 addition & 0 deletions modules/compliance-crd-scan-setting.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: CONCEPT
[id="scan-setting-object_{context}"]
= ScanSetting object

Use the `ScanSetting` object to define and reuse the operational policies to run your scans.
By default, the Compliance Operator creates the following `ScanSetting` objects:

1 change: 1 addition & 0 deletions modules/compliance-custom-storage.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: PROCEDURE
[id="compliance-custom-storage_{context}"]
= Setting custom storage size for results

While custom resources such as `ComplianceCheckResult` represent an aggregated result of one check across all scanned nodes, it can be useful to review the raw results as produced by the scanner. The raw results are produced in the ARF format and can be large (tens of megabytes per node), so it is impractical to store them in a Kubernetes resource backed by the `etcd` key-value store. Instead, every scan creates a persistent volume (PV), which defaults to 1GB in size. Depending on your environment, you might want to increase the PV size accordingly. You can do this by using the `rawResultStorage.size` attribute that is exposed in both the `ScanSetting` and `ComplianceScan` resources.

A related parameter is `rawResultStorage.rotation`, which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3; setting the rotation policy to 0 disables rotation. Given the default rotation policy and an estimate of 100MB per raw ARF scan report, you can calculate the right PV size for your environment.
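A hedged sketch of how these two attributes sit in a `ScanSetting` (the `2Gi` and `10` values are arbitrary examples, not recommendations):

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
rawResultStorage:
  size: 2Gi      # per-scan PV size; the default is 1Gi
  rotation: 10   # raw ARF results kept before rotation; default is 3, 0 disables rotation
----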
1 change: 1 addition & 0 deletions modules/compliance-inconsistent.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: PROCEDURE
[id="compliance-inconsistent_{context}"]
= Inconsistent ComplianceScan

The `ScanSetting` object lists the node roles that the compliance scans generated from the `ScanSetting` or `ScanSettingBinding` objects would scan. Each node role usually maps to a machine config pool.

[IMPORTANT]
1 change: 1 addition & 0 deletions modules/compliance-raw-tailored.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: PROCEDURE
[id="compliance-raw-tailored_{context}"]
= Using raw tailored profiles

While the `TailoredProfile` CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenSCAP previously, you might have an existing XCCDF tailoring file that you can reuse.

The `ComplianceSuite` object contains an optional `TailoringConfigMap` attribute that you can point to a custom tailoring file. The value of the `TailoringConfigMap` attribute is the name of a config map, which must contain a key called `tailoring.xml`; the value of this key is the tailoring contents.
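A minimal sketch of such a config map (the name is hypothetical; the XCCDF payload is elided):

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: tailoring-data            # hypothetical name, referenced from the scan's tailoringConfigMap
  namespace: openshift-compliance
data:
  # The key must be named tailoring.xml; its value is the XCCDF tailoring document.
  tailoring.xml: |
    <xccdf tailoring content>
----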
1 change: 1 addition & 0 deletions modules/compliance-removing-kubeletconfig.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: PROCEDURE
[id="compliance-removing-kubeletconfig_{context}"]
= Removing a KubeletConfig remediation

`KubeletConfig` remediations are included in node-level profiles. To remove a `KubeletConfig` remediation, you must manually remove it from the `KubeletConfig` objects. This example demonstrates how to remove the compliance check for the `one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available` remediation.

.Procedure
1 change: 1 addition & 0 deletions modules/compliance-rescan.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: PROCEDURE
[id="compliance-rescan_{context}"]
= Performing a rescan

Typically, you want to re-run a scan on a defined schedule, such as every Monday or daily. It can also be useful to re-run a scan once after fixing a problem on a node. To perform a single scan, annotate the scan with the `compliance.openshift.io/rescan=` option:

[source,terminal]
1 change: 1 addition & 0 deletions modules/compliance-tailored-profiles.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: PROCEDURE
[id="compliance-tailored-profiles_{context}"]
= Using tailored profiles to extend existing ProfileBundles

While the `TailoredProfile` CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenSCAP previously, you might have an existing XCCDF tailoring file that you can reuse.

The `ComplianceSuite` object contains an optional `TailoringConfigMap` attribute that you can point to a custom tailoring file. The value of the `TailoringConfigMap` attribute is the name of a config map, which must contain a key called `tailoring.xml`; the value of this key is the tailoring contents.
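For comparison, a minimal `TailoredProfile` sketch (profile and rule names are placeholders):

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: nist-moderate-modified
  namespace: openshift-compliance
spec:
  extends: rhcos4-moderate           # profile from an existing ProfileBundle
  title: Moderate profile with one rule disabled
  disableRules:
  - name: rhcos4-audit-rules-login-events
    rationale: Not applicable in this environment
----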
1 change: 1 addition & 0 deletions modules/compliance-unapplying.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: PROCEDURE
[id="compliance-unapplying_{context}"]
= Unapplying a remediation

It might be required to unapply a remediation that was previously applied.

.Procedure
1 change: 1 addition & 0 deletions modules/configuring-machine-pool-disk-volume-ocm.adoc
@@ -6,6 +6,7 @@
ifdef::openshift-rosa[]
[id="configuring-machine-pool-disk-volume-ocm_{context}"]
= Configuring machine pool disk volume using OpenShift Cluster Manager

endif::openshift-rosa[]
.Prerequisite for cluster creation
* You have the option to select the node disk sizing for the default machine pool during cluster installation.
1 change: 1 addition & 0 deletions modules/configuring-vsphere-regions-zones.adoc
@@ -9,6 +9,7 @@
:_mod-docs-content-type: PROCEDURE
[id="configuring-vsphere-regions-zones_{context}"]
= Configuring regions and zones for a VMware vCenter

You can modify the default installation configuration file so that you can deploy an {product-title} cluster to multiple vSphere data centers.

The default `install-config.yaml` file configuration from the previous release of {product-title} is deprecated. You can continue to use the deprecated default configuration, but the `openshift-installer` will prompt you with a warning message that indicates the use of deprecated fields in the configuration file.
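A hedged sketch of the multi-data-center fragment of `install-config.yaml` (all names are placeholders; the module's full example is authoritative):

[source,yaml]
----
platform:
  vsphere:
    vcenters:
    - server: vcenter.example.com
      user: <username>
      password: <password>
      datacenters:
      - datacenter-east
      - datacenter-west
    failureDomains:
    - name: fd-east
      region: us-east            # vCenter tag applied at the datacenter level
      zone: us-east-1a           # vCenter tag applied at the cluster level
      server: vcenter.example.com
      topology:
        datacenter: datacenter-east
        computeCluster: "/datacenter-east/host/cluster1"
        networks:
        - port-group-1
        datastore: "/datacenter-east/datastore/datastore1"
----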
1 change: 1 addition & 0 deletions modules/containers-signature-verify-application.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: PROCEDURE
[id="containers-signature-verify-application_{context}"]
= Verifying the signature verification configuration

After you apply the machine configs to the cluster, the Machine Config Controller detects the new `MachineConfig` object and generates a new `rendered-worker-<hash>` version.

.Prerequisites
1 change: 1 addition & 0 deletions modules/containers-signature-verify-enable.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: PROCEDURE
[id="containers-signature-verify-enable_{context}"]
= Enabling signature verification for Red Hat Container Registries

Enabling container signature validation for Red Hat Container Registries requires writing a signature verification policy file specifying the keys to verify images from these registries. For RHEL8 nodes, the registries are already defined in `/etc/containers/registries.d` by default.

.Procedure
1 change: 1 addition & 0 deletions modules/containers-signature-verify-unsigned.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: CONCEPT
[id="containers-signature-verify-artifacts_{context}"]
= Understanding the verification of container images lacking verifiable signatures

Each {product-title} release image is immutable and signed with a Red Hat production key. During an {product-title} update or installation, a release image might deploy container images that do not have verifiable signatures. Each signed release image digest is immutable. Each reference in the release image is to the immutable digest of another image, so the contents can be trusted transitively. In other words, the signature on the release image validates all release contents.

For example, image references that lack a verifiable signature are contained in the signed {product-title} release image:
1 change: 1 addition & 0 deletions modules/cpmso-feat-auto-update.adoc
@@ -12,6 +12,7 @@ endif::[]
:_mod-docs-content-type: CONCEPT
[id="cpmso-feat-auto-update_{context}"]
= Automatic updates to the control plane configuration

//Not for ROSA/OSD:
ifndef::openshift-dedicated,openshift-rosa[]
The `RollingUpdate` update strategy automatically propagates changes to your control plane configuration.
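A short sketch of where that strategy is set on the `ControlPlaneMachineSet` CR (other required fields omitted):

[source,yaml]
----
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
  strategy:
    type: RollingUpdate   # changes propagate automatically; OnDelete requires manual replacement
----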
1 change: 1 addition & 0 deletions modules/cpmso-yaml-failure-domain-openstack.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: REFERENCE
[id="cpmso-yaml-failure-domain-openstack_{context}"]
= Sample {rh-openstack} failure domain configuration

// TODO: Replace that link.
The control plane machine set concept of a failure domain is analogous to the existing {rh-openstack-first} concept of an link:https://docs.openstack.org/nova/latest/admin/availability-zones.html[availability zone]. The `ControlPlaneMachineSet` CR spreads control plane machines across multiple failure domains when possible.
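The sample configuration that follows in the module is collapsed in this diff view. A hedged sketch of the failure-domain fragment (zone names are placeholders):

[source,yaml]
----
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
spec:
  template:
    machines_v1beta1_machine_openshift_io:
      failureDomains:
        platform: OpenStack
        openstack:
        - availabilityZone: nova-az0
          rootVolume:
            availabilityZone: cinder-az0
        - availabilityZone: nova-az1
          rootVolume:
            availabilityZone: cinder-az1
----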

2 changes: 2 additions & 0 deletions modules/creating-a-machine-pool-ocm.adoc
@@ -8,9 +8,11 @@
[id="creating_machine_pools_ocm_{context}"]
ifndef::openshift-rosa,openshift-rosa-hcp[]
= Creating a machine pool

endif::openshift-rosa,openshift-rosa-hcp[]
ifdef::openshift-rosa,openshift-rosa-hcp[]
= Creating a machine pool using OpenShift Cluster Manager

endif::openshift-rosa,openshift-rosa-hcp[]

ifndef::openshift-rosa,openshift-rosa-hcp[]
1 change: 1 addition & 0 deletions modules/creating-custom-seccomp-profile.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: PROCEDURE
[id="creating-custom-seccomp-profile_{context}"]
= Creating seccomp profiles

You can use the `MachineConfig` object to create profiles.

Seccomp can restrict system calls (syscalls) within a container, limiting the access of your application.
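A hedged sketch of a `MachineConfig` that ships a custom seccomp profile to worker nodes (the file name and encoded contents are placeholders):

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-custom-seccomp
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /var/lib/kubelet/seccomp/custom-profile.json   # kubelet's seccomp profile directory
        mode: 0644
        overwrite: true
        contents:
          source: data:text/plain;charset=utf-8;base64,<base64_encoded_seccomp_json>
----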
1 change: 1 addition & 0 deletions modules/deleting-machine-pools-cli.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: PROCEDURE
[id="deleting-machine-pools-cli_{context}"]
= Deleting a machine pool using the ROSA CLI

You can delete a machine pool for your {product-title} cluster by using the {rosa-cli-first}.

[NOTE]
2 changes: 2 additions & 0 deletions modules/deleting-machine-pools-ocm.adoc
@@ -8,9 +8,11 @@
[id="deleting-machine-pools-ocm_{context}"]
ifndef::openshift-rosa,openshift-rosa-hcp[]
= Deleting a machine pool

endif::openshift-rosa,openshift-rosa-hcp[]
ifdef::openshift-rosa,openshift-rosa-hcp[]
= Deleting a machine pool using {cluster-manager}

endif::openshift-rosa,openshift-rosa-hcp[]

You can delete a machine pool for your {product-title} cluster by using {cluster-manager-first}.
1 change: 1 addition & 0 deletions modules/enable-public-cluster.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: PROCEDURE
[id="enable-public-cluster_{context}"]
= Enabling an existing private cluster to be public

// TODO: These wordings of "enabling the cluster "to be public/private" could probably be improved. At the very least, these two modules should probably use "Configuring" instead of "Enabling", as it is worded now.

After a private cluster has been created, you can later enable the cluster to be public.
2 changes: 2 additions & 0 deletions modules/hcp-bm-ingress.adoc
@@ -11,10 +11,12 @@ endif::[]
[id="hcp-bm-ingress_{context}"]
ifndef::non-bm[]
= Handling ingress in a hosted cluster on bare metal

endif::non-bm[]

ifdef::non-bm[]
= Handling ingress in a hosted cluster on non-bare-metal agent machines

endif::non-bm[]

Every {product-title} cluster has a default application Ingress Controller that typically has an external DNS record associated with it. For example, if you create a hosted cluster named `example` with the base domain `krnl.es`, you can expect the wildcard domain `*.apps.example.krnl.es` to be routable.
2 changes: 2 additions & 0 deletions modules/hcp-bm-machine-health-disable.adoc
@@ -11,10 +11,12 @@ endif::[]
[id="hcp-bm-machine-health-disable_{context}"]
ifndef::non-bm[]
= Disabling machine health checks on bare metal

endif::non-bm[]

ifdef::non-bm[]
= Disabling machine health checks on non-bare-metal agent machines

endif::non-bm[]


2 changes: 2 additions & 0 deletions modules/hcp-bm-machine-health.adoc
@@ -11,10 +11,12 @@ endif::[]
[id="hcp-bm-machine-health_{context}"]
ifndef::non-bm[]
= Enabling machine health checks on bare metal

endif::non-bm[]

ifdef::non-bm[]
= Enabling machine health checks on non-bare-metal agent machines

endif::non-bm[]

You can enable machine health checks on bare metal to repair and replace unhealthy managed cluster nodes automatically. You must have additional agent machines that are ready to install in the managed cluster.
@@ -5,6 +5,7 @@
:_mod-docs-content-type: PROCEDURE
[id="hcp-ibm-power-create-heterogeneous-nodepools-agent-hc_{context}"]
= Creating the AgentServiceConfig custom resource

To create heterogeneous node pools on an agent hosted cluster, you must create the `AgentServiceConfig` CR with operating system (OS) images for two different architectures.
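A hedged sketch of such a CR with OS images for two architectures (URLs, versions, and storage sizes are placeholders):

[source,yaml]
----
apiVersion: agent-install.openshift.io/v1beta1
kind: AgentServiceConfig
metadata:
  name: agent
spec:
  databaseStorage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
  filesystemStorage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 100Gi
  osImages:
  - openshiftVersion: "4.14"
    cpuArchitecture: x86_64
    url: <rhcos_x86_64_live_iso_url>
    version: <rhcos_version>
  - openshiftVersion: "4.14"
    cpuArchitecture: ppc64le
    url: <rhcos_ppc64le_live_iso_url>
    version: <rhcos_version>
----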

.Procedure
2 changes: 2 additions & 0 deletions modules/install-creating-install-config-aws-edge-zones.adoc
@@ -13,9 +13,11 @@ endif::[]
[id="install-creating-install-config-aws-edge-zones_{context}"]
ifdef::local-zone[]
= Modifying an installation configuration file to use AWS Local Zones

endif::local-zone[]
ifdef::wavelength-zone[]
= Modifying an installation configuration file to use AWS Wavelength Zones

endif::wavelength-zone[]

Modify an `install-config.yaml` file to include AWS {zone-type}.
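A hedged sketch of the edge compute pool fragment of `install-config.yaml` (the zone shown is a placeholder Local Zone name; Wavelength Zone names differ):

[source,yaml]
----
compute:
- name: edge                 # edge compute pool for Local Zones or Wavelength Zones
  architecture: amd64
  hyperthreading: Enabled
  replicas: 1
  platform:
    aws:
      zones:
      - us-east-1-nyc-1a     # placeholder zone name
platform:
  aws:
    region: us-east-1
----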
2 changes: 2 additions & 0 deletions modules/install-sno-installing-sno-on-azure.adoc
@@ -6,9 +6,11 @@
[id="installing-sno-on-azure_{context}"]
ifndef::openshift-origin[]
= Installing {sno} on Azure

endif::openshift-origin[]
ifdef::openshift-origin[]
= Installing {sno-okd} on Azure

endif::openshift-origin[]

Installing a single node cluster on Azure requires installer-provisioned installation using the "Installing a cluster on Azure with customizations" procedure.
2 changes: 2 additions & 0 deletions modules/install-sno-installing-sno-on-gcp.adoc
@@ -6,9 +6,11 @@
[id="installing-sno-on-gcp_{context}"]
ifndef::openshift-origin[]
= Installing {sno} on {gcp-short}

endif::openshift-origin[]
ifdef::openshift-origin[]
= Installing {sno-okd} on {gcp-short}

endif::openshift-origin[]

Installing a single node cluster on {gcp-short} requires installer-provisioned installation using the "Installing a cluster on {gcp-short} with customizations" procedure.
@@ -6,9 +6,11 @@
[id="supported-cloud-providers-for-single-node-openshift_{context}"]
ifndef::openshift-origin[]
= Supported cloud providers for {sno}

endif::openshift-origin[]
ifdef::openshift-origin[]
= Supported cloud providers for {sno-okd}

endif::openshift-origin[]

The following table contains a list of supported cloud providers and CPU architectures.
2 changes: 2 additions & 0 deletions modules/installation-aws-about-government-region.adoc
@@ -14,9 +14,11 @@ endif::[]
[id="installation-aws-about-gov-secret-region_{context}"]
ifdef::aws-gov[]
= AWS government regions

endif::aws-gov[]
ifdef::aws-secret[]
= AWS secret regions

endif::aws-secret[]

ifdef::aws-gov[]
3 changes: 3 additions & 0 deletions modules/installation-aws-add-zone-locations.adoc
@@ -18,12 +18,15 @@ endif::[]
[id="installation-aws-add-zone-locations_{context}"]
ifdef::local-zone[]
= Opting in to an AWS {zone-type}

endif::local-zone[]
ifdef::wavelength-zone[]
= Opting in to an AWS {zone-type}

endif::wavelength-zone[]
ifdef::post-aws-zones[]
= Opting in to AWS Local Zones or Wavelength Zones

endif::post-aws-zones[]

If you plan to create subnets in AWS {zone-type}, you must opt in to each zone group separately.
1 change: 1 addition & 0 deletions modules/installation-aws-marketplace-government.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: PROCEDURE
[id="installation-aws-marketplace-government_{context}"]
= Installation requirements for government regions

If you are deploying an {product-title} cluster using an AWS Marketplace image in a government region, you must first subscribe through {aws-short}. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes.

:platform-abbreviation: an AWS
1 change: 1 addition & 0 deletions modules/installation-aws-marketplace-subscribe.adoc
@@ -25,6 +25,7 @@ endif::[]
:_mod-docs-content-type: PROCEDURE
[id="installation-aws-marketplace-subscribe_{context}"]
= Obtaining an AWS Marketplace image

If you are deploying an {product-title} cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes.

:platform-abbreviation: an AWS
1 change: 1 addition & 0 deletions modules/installation-aws-permissions-iam-shared-vpc.adoc
@@ -4,6 +4,7 @@
:_mod-docs-content-type: REFERENCE
[id="installation-aws-permissions-iam-shared-vpc_{context}"]
= Modifying trust policy when installing into a shared VPC

If you install your cluster using a shared VPC, you can use the `Passthrough` or `Manual` credentials mode. You must add the IAM role used to install the cluster as a principal in the trust policy of the account that owns the VPC.

If you use `Passthrough` mode, add the Amazon Resource Name (ARN) of the account that creates the cluster, such as `arn:aws:iam::123456789012:user/clustercreator`, to the trust policy as a principal.