OSD cloud agnostic updates.
eohartman authored and openshift-cherrypick-robot committed May 17, 2024
1 parent 0accea6 commit b00a494
Showing 17 changed files with 53 additions and 26 deletions.
2 changes: 1 addition & 1 deletion authentication/bound-service-account-tokens.adoc
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

toc::[]

You can use bound service account tokens, which improves the ability to integrate with cloud provider identity access management (IAM) services, such as AWS IAM.
You can use bound service account tokens, which improves the ability to integrate with cloud provider identity access management (IAM) services, such as AWS IAM or Google Cloud Platform IAM.
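
A bound token can be requested by mounting a projected volume in a pod spec, for example (a sketch only; the service account name, audience, and expiration values are assumptions, not part of this commit):

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  serviceAccountName: example-sa
  containers:
  - name: app
    image: registry.example.com/app:latest
    volumeMounts:
    - mountPath: /var/run/secrets/tokens
      name: bound-sa-token
  volumes:
  - name: bound-sa-token
    projected:
      sources:
      - serviceAccountToken:
          path: token                # file name of the token inside the mount
          audience: openshift        # assumed audience; must match the verifying service
          expirationSeconds: 3600    # token is rotated before this expiry
----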

// About bound service account tokens
include::modules/bound-sa-tokens-about.adoc[leveloffset=+1]
7 changes: 2 additions & 5 deletions modules/cluster-logging-cloudwatch.adoc
@@ -6,12 +6,9 @@
// can be re-used in associated products.

:_mod-docs-content-type: CONCEPT
[id="cluster-logging-cloudwatch_{context}"]
= CloudWatch recommendation for {product-title}

Red Hat recommends that you use the AWS CloudWatch solution for your logging needs.

[id="cluster-logging-requirements-explained_{context}"]
== Logging requirements

Hosting your own logging stack requires a large amount of compute resources and storage, which might be dependent on your cloud service quota. The compute resource requirements can start at 48 GB or more, while the storage requirement can be as large as 1600 GB or more. The logging stack runs on your worker nodes, which reduces your available workload resource. With these considerations, hosting your own logging stack increases your cluster operating costs.

4 changes: 2 additions & 2 deletions modules/insights-operator-what-information-is-collected.adoc
@@ -11,8 +11,8 @@ The following information is collected by the Insights Operator:
* Configuration files, such as the image registry configuration, of your cluster to determine incorrect settings and issues that are specific to parameters you set
* Errors that occur in the cluster components
* Progress information of running updates, and the status of any component upgrades
* Details of the platform that {product-title} is deployed on, such as Amazon Web Services, and the region that the cluster is located in
* Details of the platform that {product-title} is deployed on and the region that the cluster is located in
ifndef::openshift-dedicated[]
* Cluster workload information transformed into discrete Secure Hash Algorithm (SHA) values, which allows Red Hat to assess workloads for security and version vulnerabilities without disclosing sensitive details
endif::openshift-dedicated[]
* If an Operator reports an issue, information is collected about core {product-title} pods in the `openshift-*` and `kube-*` projects. This includes state, resource, security context, volume information, and more.
* If an Operator reports an issue, information is collected about core {product-title} pods in the `openshift-*` and `kube-*` projects. This includes state, resource, security context, volume information, and more
2 changes: 1 addition & 1 deletion modules/logging-create-loki-cr-cli.adoc
@@ -54,7 +54,7 @@ It is not possible to change the number `1x` for the deployment size.
// end::pre-5.9[]

// tag::5.9[]
+

.Example `LokiStack` CR
[source,yaml]
----
26 changes: 16 additions & 10 deletions modules/logging-loki-retention.adoc
@@ -8,9 +8,14 @@

With Logging version 5.6 and higher, you can configure retention policies based on log streams. Rules for these may be set globally, per tenant, or both. If you configure both, tenant rules apply before global rules.

[NOTE]
====
Although logging version 5.9 and higher supports schema v12, v13 is recommended.
====
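
The schema version is declared per storage period in the `LokiStack` CR, along these lines (a sketch; the effective date is illustrative):

[source,yaml]
----
spec:
  storage:
    schemas:
    - version: v13            # recommended schema for logging 5.9 and higher
      effectiveDate: "2024-06-01"   # assumed date from which this schema applies
----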

. To enable stream-based retention, create a `LokiStack` custom resource (CR):
+
.Example global stream-based retention
.Example global stream-based retention for AWS
[source,yaml]
----
apiVersion: loki.grafana.com/v1
@@ -19,8 +24,8 @@ metadata:
name: logging-loki
namespace: openshift-logging
spec:
limits:
global: <1>
limits:
global: <1>
retention: <2>
days: 20
streams:
@@ -40,15 +45,16 @@ spec:
secret:
name: logging-loki-s3
type: aws
storageClassName: standard
storageClassName: gp3-csi
tenants:
mode: openshift-logging
----
<1> Sets retention policy for all log streams. *Note: This field does not impact the retention period for stored logs in object storage.*
<2> Retention is enabled in the cluster when this block is added to the CR.
<3> Contains the link:https://grafana.com/docs/loki/latest/logql/query_examples/#query-examples[LogQL query] used to define the log stream.
+
.Example per-tenant stream-based retention
<3> Contains the link:https://grafana.com/docs/loki/latest/logql/query_examples/#query-examples[LogQL query] used to define the log stream.

.Example per-tenant stream-based retention for AWS
[source,yaml]
----
apiVersion: loki.grafana.com/v1
@@ -84,15 +90,15 @@ spec:
secret:
name: logging-loki-s3
type: aws
storageClassName: standard
storageClassName: gp3-csi
tenants:
mode: openshift-logging
----
<1> Sets retention policy by tenant. Valid tenant types are `application`, `audit`, and `infrastructure`.
<2> Contains the link:https://grafana.com/docs/loki/latest/logql/query_examples/#query-examples[LogQL query] used to define the log stream.

. Apply the `LokiStack` CR:
+
. Apply the `LokiStack` CR:

[source,terminal]
----
$ oc apply -f <filename>.yaml
3 changes: 3 additions & 0 deletions modules/nodes-nodes-viewing-listing.adoc
@@ -101,6 +101,9 @@ For example:
$ oc describe node node1.example.com
----
+
include::snippets/osd-aws-example-only.adoc[]

.Example output
[source,text]
----
2 changes: 2 additions & 0 deletions modules/nodes-pods-pod-disruption-about.adoc
@@ -45,6 +45,8 @@ You can check for pod disruption budgets across all projects with the following:
$ oc get poddisruptionbudget --all-namespaces
----

include::snippets/osd-aws-example-only.adoc[]

.Example output
[source,terminal]
----
2 changes: 2 additions & 0 deletions modules/oc-adm-by-example-content.adoc
@@ -634,6 +634,8 @@ Initiate reboot of the specified MachineConfigPool.
== oc adm release extract
Extract the contents of an update payload to disk

include::snippets/osd-aws-example-only.adoc[]

.Example usage
[source,bash,options="nowrap"]
----
2 changes: 2 additions & 0 deletions modules/oc-by-example-content.adoc
@@ -1702,6 +1702,8 @@ Display information about an image
== oc image mirror
Mirror images from one repository to another

include::snippets/osd-aws-example-only.adoc[]

.Example usage
[source,bash,options="nowrap"]
----
2 changes: 1 addition & 1 deletion modules/rosa-cluster-autoscaler-ui-settings.adoc
@@ -56,7 +56,7 @@ The tables explain all the configurable UI settings when using cluster autoscaling
|false

|`balancing-ignored-labels`
|This option specifies labels that the cluster autoscaler should ignore when considering node group similarity. For example, if you have nodes with a "topology.ebs.csi.aws.com/zone" label, you can add the name of this label to prevent the cluster autoscaler from splitting nodes into different node groups based on its value. This option cannot contain spaces.
|This option specifies labels that the cluster autoscaler should ignore when considering node group similarity. This option cannot contain spaces.
|`array (string)`
|Format should be a comma-separated list of labels.
|===
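
In practice the setting is a single comma-separated string; for example (the label names here are illustrative placeholders, not values from this commit):

[source,yaml]
----
balancing-ignored-labels: example.com/zone,example.com/rack
----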
@@ -30,7 +30,9 @@ endif::openshift-rosa,openshift-dedicated[]
. Log in to a cluster.

. Run the following command, which queries a cluster's Prometheus service and returns the full set of time series data captured by Telemetry:
+

include::snippets/osd-aws-example-only.adoc[]

[source,terminal]
----
$ curl -G -k -H "Authorization: Bearer $(oc whoami -t)" \
2 changes: 1 addition & 1 deletion networking/cidr-range-definitions.adoc
@@ -17,7 +17,7 @@ Machine CIDR ranges cannot be changed after creating your cluster.
====

ifdef::openshift-rosa,openshift-dedicated[]
When specifying subnet CIDR ranges, ensure that the subnet CIDR range is within the defined Machine CIDR. You must verify that the subnet CIDR ranges allow for enough IP addresses for all intended workloads, including at least eight IP addresses for possible AWS Load Balancers.
When specifying subnet CIDR ranges, ensure that the subnet CIDR range is within the defined Machine CIDR. You must verify that the subnet CIDR ranges allow for enough IP addresses for all intended workloads, depending on the platform on which the cluster is hosted.
endif::[]

[IMPORTANT]
2 changes: 1 addition & 1 deletion networking/network-verification.adoc
@@ -38,7 +38,7 @@ The network verification includes checks for each of the following requirements:
* The VPC has `enableDnsSupport` enabled.
* The VPC has `enableDnsHostnames` enabled.
ifdef::openshift-dedicated[]
* Egress is available to the required domain and port combinations that are specified in the xref:../osd_planning/aws-ccs.adoc#osd-aws-privatelink-firewall-prerequisites_aws-ccs[AWS firewall prerequisites] section.
* Egress is available to the required domain and port combinations for {product-title} (ROSA). For ROSA, these domain and port combinations are specified in the xref:../osd_planning/aws-ccs.adoc#osd-aws-privatelink-firewall-prerequisites_aws-ccs[AWS firewall prerequisites] section.
endif::openshift-dedicated[]
ifdef::openshift-rosa[]
* Egress is available to the required domain and port combinations that are specified in the xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#osd-aws-privatelink-firewall-prerequisites_rosa-sts-aws-prereqs[AWS firewall prerequisites] section.
2 changes: 1 addition & 1 deletion nodes/cma/nodes-cma-autoscaling-custom-trigger-auth.adoc
@@ -117,7 +117,7 @@ spec:
----
<1> Specifies the namespace of the object you want to scale.
<2> Specifies that this trigger authentication uses a platform-native pod authentication method for authorization.
<3> Specifies a pod identity. Supported values are `none`, `azure`, `aws-eks`, or `aws-kiam`. The default is `none`.
<3> Specifies a pod identity. Supported values are `none`, `azure`, `gcp`, `aws-eks`, or `aws-kiam`. The default is `none`.

// Remove ifdef after https://github.com/openshift/openshift-docs/pull/62147 merges
ifndef::openshift-rosa,openshift-dedicated[]
3 changes: 1 addition & 2 deletions observability/logging/cluster-logging.adoc
@@ -30,8 +30,7 @@ include::modules/cluster-logging-about.adoc[leveloffset=+1]

ifdef::openshift-rosa,openshift-dedicated[]
include::modules/cluster-logging-cloudwatch.adoc[leveloffset=+1]
.Next steps
* See xref:../../observability/logging/log_collection_forwarding/configuring-log-forwarding.adoc#cluster-logging-collector-log-forward-cloudwatch_configuring-log-forwarding[Forwarding logs to Amazon CloudWatch] for instructions.
For information, see xref:../../observability/logging/log_collection_forwarding/log-forwarding.adoc#about-log-collection_log-forwarding[About log collection and forwarding].
endif::[]

include::modules/cluster-logging-json-logging-about.adoc[leveloffset=+2]
@@ -30,3 +30,4 @@ The `fluentdForward` output is only supported if you are using the Fluentd collector
====
`syslog`:: An external log aggregation solution that supports the syslog link:https://tools.ietf.org/html/rfc3164[RFC3164] or link:https://tools.ietf.org/html/rfc5424[RFC5424] protocols. The `syslog` output can use a UDP, TCP, or TLS connection.
`cloudwatch`:: Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS).
`cloudlogging`:: Google Cloud Logging, a monitoring and log storage service hosted by Google Cloud Platform (GCP).
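
A `cloudwatch` output is wired into a `ClusterLogForwarder` resource roughly as follows (a sketch; the output name, secret name, region, and pipeline name are assumptions, not part of this commit):

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: cw                    # name referenced by the pipeline below
    type: cloudwatch
    cloudwatch:
      groupBy: logType          # create one CloudWatch log group per log type
      region: us-east-1         # assumed AWS region
    secret:
      name: cw-secret           # assumed secret holding the AWS credentials
  pipelines:
  - name: to-cloudwatch
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - cw
----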
13 changes: 13 additions & 0 deletions snippets/osd-aws-example-only.adoc
@@ -0,0 +1,13 @@
//Text snippet appears in the following modules:
//
// * ../modules/telemetry-showing-data-collected-from-the-cluster.adoc
// * ../modules/oc-adm-by-example-content.adoc
// * ../modules/nodes-pods-pod-disruption-about.adoc
// * ../modules/oc-by-example-content.adoc

:_mod-docs-content-type: SNIPPET

[NOTE]
====
The following example contains some values that are specific to {product-title} on AWS.
====
