
Merge pull request #67809 from openshift-cherrypick-robot/cherry-pick-67751-to-enterprise-4.15

[enterprise-4.15] OBSDOCS-603: Update attributes and additional improvements - part 3
abrennan89 authored Nov 13, 2023
2 parents c3681c0 + 3095f04 commit 8ea584b
Showing 14 changed files with 25 additions and 18 deletions.
4 changes: 2 additions & 2 deletions logging/log_collection_forwarding/log-forwarding.adoc
@@ -7,7 +7,7 @@ include::_attributes/attributes-openshift-dedicated.adoc[]

toc::[]

-The Cluster Logging Operator deploys a collector based on the `ClusterLogForwarder` resource specification. There are two collector options supported by this Operator: the legacy Fluentd collector, and the Vector collector.
+The {clo} deploys a collector based on the `ClusterLogForwarder` resource specification. There are two collector options supported by this Operator: the legacy Fluentd collector, and the Vector collector.

include::snippets/logging-fluentd-dep-snip.adoc[]
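
For reference, the collector type is selected in the `ClusterLogging` custom resource; a minimal sketch, assuming the logging 5.x schema (`instance` in `openshift-logging` is the conventional name and namespace):

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    type: vector # use "fluentd" for the legacy collector
----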

@@ -35,7 +35,7 @@ To use the multi log forwarder feature, you must create a service account and cl

[IMPORTANT]
====
-In order to support multi log forwarding in additional namespaces other than the `openshift-logging` namespace, you must xref:../../logging/cluster-logging-upgrading.adoc#logging-operator-upgrading-all-ns_cluster-logging-upgrading[update the Cluster Logging Operator to watch all namespaces]. This functionality is supported by default in new Cluster Logging Operator version 5.8 installations.
+In order to support multi log forwarding in additional namespaces other than the `openshift-logging` namespace, you must xref:../../logging/cluster-logging-upgrading.adoc#logging-operator-upgrading-all-ns_cluster-logging-upgrading[update the {clo} to watch all namespaces]. This functionality is supported by default in new {clo} version 5.8 installations.
====
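
A hedged sketch of the service account setup that multi log forwarding requires, assuming the cluster roles that logging 5.8 ships; the account name and namespace are placeholders:

[source,terminal]
----
$ oc create sa collector -n my-namespace
$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n my-namespace
----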

include::modules/log-collection-rbac-permissions.adoc[leveloffset=+2]
2 changes: 1 addition & 1 deletion logging/logging_alerts/custom-logging-alerts.adoc
@@ -8,7 +8,7 @@ toc::[]

In logging 5.7 and later versions, users can configure the LokiStack deployment to produce customized alerts and recorded metrics. If you want to use customized link:https://grafana.com/docs/loki/latest/alert/[alerting and recording rules], you must enable the LokiStack ruler component.

-LokiStack log-based alerts and recorded metrics are triggered by providing link:https://grafana.com/docs/loki/latest/query/[LogQL] expressions to the ruler component. The Loki Operator manages a ruler that is optimized for the selected LokiStack size, which can be `1x.extra-small`, `1x.small`, or `1x.medium`.
+LokiStack log-based alerts and recorded metrics are triggered by providing link:https://grafana.com/docs/loki/latest/query/[LogQL] expressions to the ruler component. The {loki-op} manages a ruler that is optimized for the selected LokiStack size, which can be `1x.extra-small`, `1x.small`, or `1x.medium`.

To provide these expressions, you must create an `AlertingRule` custom resource (CR) containing Prometheus-compatible link:https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/[alerting rules], or a `RecordingRule` CR containing Prometheus-compatible link:https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/[recording rules].
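
A minimal `RecordingRule` sketch, assuming the `loki.grafana.com/v1` schema; the namespace, rule name, and LogQL expression are illustrative only:

[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: RecordingRule
metadata:
  name: my-recording-rule
  namespace: my-namespace
spec:
  tenantID: application # tenant whose logs the rule queries
  groups:
  - name: my-record-group
    interval: 1m
    rules:
    - record: myapp:error_lines:rate5m
      expr: sum(rate({kubernetes_namespace_name="my-namespace"} |= "error" [5m]))
----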

2 changes: 1 addition & 1 deletion logging/logging_alerts/default-logging-alerts.adoc
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

toc::[]

-Logging alerts are installed as part of the Cluster Logging Operator installation. Alerts depend on metrics exported by the log collection and log storage backends. These metrics are enabled if you selected the option to *Enable operator recommended cluster monitoring on this namespace* when installing the Cluster Logging Operator. For more information about installing logging Operators, see xref:../../logging/cluster-logging-deploying#cluster-logging-deploy-console_cluster-logging-deploying[Installing the {logging-title} using the web console].
+Logging alerts are installed as part of the {clo} installation. Alerts depend on metrics exported by the log collection and log storage backends. These metrics are enabled if you selected the option to *Enable operator recommended cluster monitoring on this namespace* when installing the {clo}. For more information about installing logging Operators, see xref:../../logging/cluster-logging-deploying#cluster-logging-deploy-console_cluster-logging-deploying[Installing the {logging-title} using the web console].

Default logging alerts are sent to the {product-title} monitoring stack Alertmanager in the `openshift-monitoring` namespace, unless you have disabled the local Alertmanager instance.
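
The *Enable operator recommended cluster monitoring on this namespace* option mentioned above corresponds to a namespace label; a sketch, assuming the standard `openshift.io/cluster-monitoring` key:

[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  labels:
    openshift.io/cluster-monitoring: "true" # enables the recommended monitoring
----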

2 changes: 1 addition & 1 deletion modules/cluster-logging-collector-log-forward-es.adoc
@@ -6,7 +6,7 @@ You can optionally forward logs to an external Elasticsearch instance in additio

To configure log forwarding to an external Elasticsearch instance, you must create a `ClusterLogForwarder` custom resource (CR) with an output to that instance, and a pipeline that uses the output. The external Elasticsearch output can use the HTTP (insecure) or HTTPS (secure HTTP) connection.

-To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the `default` output to forward logs to the internal instance. You do not need to create a `default` output. If you do configure a `default` output, you receive an error message because the `default` output is reserved for the Red Hat OpenShift Logging Operator.
+To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the `default` output to forward logs to the internal instance. You do not need to create a `default` output. If you do configure a `default` output, you receive an error message because the `default` output is reserved for the {clo}.
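
A minimal sketch of an external Elasticsearch output and pipeline, assuming logging 5.x `ClusterLogForwarder` fields; the URL and secret name are placeholders:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: elasticsearch-external
    type: elasticsearch
    url: https://elasticsearch.example.com:9200 # HTTP or HTTPS
    secret:
      name: es-secret # TLS material and/or credentials
  pipelines:
  - name: forward-app-logs
    inputRefs:
    - application
    outputRefs:
    - elasticsearch-external
----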

[NOTE]
====
3 changes: 2 additions & 1 deletion modules/cluster-logging-collector-log-forward-gcp.adoc
@@ -14,7 +14,8 @@ Using this feature with Fluentd is not supported.
====

.Prerequisites
-* {logging-title-uc} Operator 5.5.1 and later
+
+* {clo} 5.5.1 and later
.Procedure
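
The procedure steps are collapsed in this diff; a hedged sketch of the `googleCloudLogging` output they configure, assuming logging 5.x fields (the project ID, log ID, and secret name are placeholders):

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: gcp-1
    type: googleCloudLogging
    secret:
      name: gcp-secret # holds the Google service account key
    googleCloudLogging:
      projectId: "my-gcp-project"
      logId: "app-gcp"
  pipelines:
  - name: gcp-app-logs
    inputRefs:
    - application
    outputRefs:
    - gcp-1
----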

@@ -19,7 +19,7 @@ _output_:: The destination for log data that you define, or where you want the l
* `kafka`. A Kafka broker. The `kafka` output can use a TCP or TLS connection.
-* `default`. The internal {product-title} Elasticsearch instance. You are not required to configure the default output. If you do configure a `default` output, you receive an error message because the `default` output is reserved for the Red Hat OpenShift Logging Operator.
+* `default`. The internal {product-title} Elasticsearch instance. You are not required to configure the default output. If you do configure a `default` output, you receive an error message because the `default` output is reserved for the {clo}.
--
+
_pipeline_:: Defines simple routing from one log type to one or more outputs, or which logs you want to send. The log types are one of the following:
@@ -3,7 +3,7 @@
[id="cluster-logging-troubleshooting-log-forwarding_{context}"]
= Troubleshooting log forwarding

-When you create a `ClusterLogForwarder` custom resource (CR), if the Red Hat OpenShift Logging Operator does not redeploy the Fluentd pods automatically, you can delete the Fluentd pods to force them to redeploy.
+When you create a `ClusterLogForwarder` custom resource (CR), if the {clo} does not redeploy the Fluentd pods automatically, you can delete the Fluentd pods to force them to redeploy.
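
A sketch of the forced redeploy, assuming the collector pods carry the standard `logging-infra=collector` label:

[source,terminal]
----
$ oc delete pod --selector logging-infra=collector -n openshift-logging
----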

.Prerequisites

6 changes: 6 additions & 0 deletions modules/configuring-logging-loki-ruler.adoc
@@ -10,6 +10,12 @@ When the LokiStack ruler component is enabled, users can define a group of link:

Administrators can enable the ruler by modifying the `LokiStack` custom resource (CR).

+.Prerequisites
+
+* You have installed the {clo} and the {loki-op}.
+* You have created a `LokiStack` CR.
+* You have administrator permissions.
.Procedure

* Enable the ruler by ensuring that the `LokiStack` CR contains the following spec configuration:
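+
A hedged sketch of that configuration (the block itself is collapsed in this diff), assuming the `loki.grafana.com/v1` schema; the selector labels shown are illustrative:
+
[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  rules:
    enabled: true # turns on the ruler component
    selector: # which AlertingRule/RecordingRule objects to evaluate
      matchLabels:
        openshift.io/cluster-monitoring: "true"
    namespaceSelector: # which namespaces to search for rule objects
      matchLabels:
        openshift.io/cluster-monitoring: "true"
----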
4 changes: 2 additions & 2 deletions modules/log-collection-rbac-permissions.adoc
@@ -6,13 +6,13 @@
[id="log-collection-rbac-permissions_{context}"]
= Authorizing log collection RBAC permissions

-In logging 5.8 and later, the Cluster Logging Operator provides `collect-audit-logs`, `collect-application-logs`, and `collect-infrastructure-logs` cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively.
+In logging 5.8 and later, the {clo} provides `collect-audit-logs`, `collect-application-logs`, and `collect-infrastructure-logs` cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively.

You can authorize RBAC permissions for log collection by binding the required cluster roles to a service account.

.Prerequisites

-* The Cluster Logging Operator is installed in the `openshift-logging` namespace.
+* The {clo} is installed in the `openshift-logging` namespace.
* You have administrator permissions.
.Procedure
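
A sketch of the binding step, assuming the cluster role names above; `logcollector` in `openshift-logging` is a placeholder service account:

[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector
----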
2 changes: 1 addition & 1 deletion modules/logging-enabling-loki-alerts.adoc
@@ -24,7 +24,7 @@ The `AlertingRule` CR contains a set of specifications and webhook validation de

.Prerequisites

-* {logging-title-uc} Operator 5.7 and later
+* {clo} 5.7 and later
* {product-title} 4.13 and later
.Procedure
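
The procedure body is collapsed here; a minimal `AlertingRule` sketch, assuming the `loki.grafana.com/v1` schema — names, the tenant, and the LogQL expression are illustrative:

[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: AlertingRule
metadata:
  name: app-error-alert
  namespace: my-namespace
spec:
  tenantID: application # tenant whose logs the rule queries
  groups:
  - name: app-alerts
    interval: 1m
    rules:
    - alert: HighErrorRate
      expr: sum(rate({kubernetes_namespace_name="my-namespace"} |= "error" [5m])) > 10
      for: 10s
      labels:
        severity: warning
      annotations:
        summary: High rate of error lines in application logs
----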
4 changes: 2 additions & 2 deletions modules/logging-forward-splunk.adoc
@@ -13,8 +13,8 @@ Using this feature with Fluentd is not supported.
====

.Prerequisites
-* Red Hat OpenShift Logging Operator 5.6 and higher
-* ClusterLogging instance with vector specified as collector
+* {clo} 5.6 or later
+* A `ClusterLogging` instance with `vector` specified as the collector
* Base64 encoded Splunk HEC token
.Procedure
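
A minimal sketch of the Splunk output, assuming Vector and logging 5.6+ fields; the secret name and HEC endpoint are placeholders:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: splunk-receiver
    type: splunk
    secret:
      name: vector-splunk-secret # contains the hecToken key
    url: http://splunk-hec.example.com:8088
  pipelines:
  - name: splunk-pipeline
    inputRefs:
    - application
    outputRefs:
    - splunk-receiver
----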
4 changes: 2 additions & 2 deletions modules/logging-http-forward.adoc
@@ -5,11 +5,11 @@
[id="logging-deploy-loki-console_{context}"]
= Forwarding logs over HTTP

-Forwarding logs over HTTP is supported for both fluentd and vector collectors. To enable, specify `http` as the output type in the `ClusterLogForwarder` custom resource (CR).
+Forwarding logs over HTTP is supported for both the Fluentd and Vector log collectors. To enable, specify `http` as the output type in the `ClusterLogForwarder` custom resource (CR).

.Procedure

-* Create or edit the ClusterLogForwarder Custom Resource (CR) using the template below:
+* Create or edit the `ClusterLogForwarder` CR using the template below:
.Example ClusterLogForwarder CR
[source,yaml]
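----
# The original example body is collapsed in this diff view. This is a hedged
# sketch, not the author's example; the URL, headers, and secret name are
# assumptions.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: httpout-app
    type: http
    url: https://httpstack.example.com:8443
    http:
      headers: # optional additional request headers
        h1: v1
      method: POST
    secret:
      name: http-secret # TLS material or bearer token
  pipelines:
  - name: http-pipeline
    inputRefs:
    - application
    outputRefs:
    - httpout-app
----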
2 changes: 1 addition & 1 deletion modules/loki-rbac-permissions.adoc
@@ -13,7 +13,7 @@ Administrators can allow users to create and manage their own alerting rules by

.Prerequisites

-* The Cluster Logging Operator is installed in the `openshift-logging` namespace.
+* The {clo} is installed in the `openshift-logging` namespace.
* You have administrator permissions.
.Procedure
@@ -7,7 +7,7 @@
[id="rosa-cluster-logging-collector-log-forward-sts-cloudwatch_{context}"]
= Forwarding logs to Amazon CloudWatch from STS enabled clusters

-For clusters with AWS Security Token Service (STS) enabled, create the AWS IAM roles and policies that will allow the log forwarding, and a `ClusterLogForwarder` custom resource (CR) with an output for CloudWatch.
+For clusters with AWS Security Token Service (STS) enabled, you must create the AWS IAM roles and policies that enable log forwarding, and a `ClusterLogForwarder` custom resource (CR) with an output for CloudWatch.

.Prerequisites

@@ -111,7 +111,7 @@ $ aws iam attach-role-policy \
+
<1> Replace `policy_ARN` with the output you saved while creating the policy.

-. Create a `Secret` YAML file for the logging operator:
+. Create a `Secret` YAML file for the {clo}:
+
--
[source,yaml]
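----
# Collapsed in this diff view. A hedged sketch of the Secret, assuming the
# role_arn key read by STS-enabled deployments; the name and ARN are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: cloudwatch-credentials
  namespace: openshift-logging
stringData:
  role_arn: arn:aws:iam::123456789012:role/my-cloudwatch-role
----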
