RHDEVDOCS-2676 Document "Expose ability to forward logs only from spe… #32334

Merged 1 commit on May 11, 2021
11 changes: 2 additions & 9 deletions logging/cluster-logging-exported-fields.adoc
@@ -5,16 +5,9 @@ include::modules/common-attributes.adoc[]

toc::[]

These are the fields exported by the logging system and available for searching from Elasticsearch and Kibana. Use the full, dotted field name when searching. For example, for an Elasticsearch */_search URL*, to look for a Kubernetes pod name, use `/_search/q=kubernetes.pod_name:name-of-my-pod`.

The following sections describe fields that may not be present in your logging store. Not all of these fields are present in every record. The fields are grouped in the following categories:

* `exported-fields-Default`
* `exported-fields-systemd`
9 changes: 5 additions & 4 deletions logging/cluster-logging-external.adoc
@@ -5,13 +5,13 @@ include::modules/common-attributes.adoc[]

toc::[]

By default, OpenShift Logging sends container and infrastructure logs to the default internal Elasticsearch log store defined in the `ClusterLogging` custom resource. However, it does not send audit logs to the internal store, because the internal store does not provide secure storage. If this default configuration meets your needs, you do not need to configure the Cluster Log Forwarder.

To send logs to other log aggregators, you use the {product-title} Cluster Log Forwarder. This API enables you to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. You can send different types of logs to various systems, so various individuals can access each type. You can also enable TLS support to send logs securely, as required by your organization.

[NOTE]
====
To send audit logs to the internal log store, use the Cluster Log Forwarder as described in xref:../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-audit_cluster-logging-store[Forward audit logs to the log store].
====

When you forward logs externally, the Cluster Logging Operator creates or modifies a Fluentd config map to send logs using your desired protocols. You are responsible for configuring the protocol on the external log aggregator.
@@ -20,7 +20,7 @@ Alternatively, you can create a config map to use the xref:../logging/cluster-lo

[IMPORTANT]
====
You cannot use the config map methods and the Cluster Log Forwarder in the same cluster.
====

// The following include statements pull in the module files that comprise
@@ -35,5 +35,6 @@ include::modules/cluster-logging-collector-log-forward-fluentd.adoc[leveloffset=
include::modules/cluster-logging-collector-log-forward-syslog.adoc[leveloffset=+1]
include::modules/cluster-logging-collector-log-forward-kafka.adoc[leveloffset=+1]
include::modules/cluster-logging-collector-log-forward-project.adoc[leveloffset=+1]
include::modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc[leveloffset=+1]
include::modules/cluster-logging-collector-legacy-fluentd.adoc[leveloffset=+1]
include::modules/cluster-logging-collector-legacy-syslog.adoc[leveloffset=+1]
55 changes: 55 additions & 0 deletions modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc
@@ -0,0 +1,55 @@
[id="cluster-logging-collector-log-forward-logs-from-application-pods_{context}"]
= Forwarding application logs from specific pods

As a cluster administrator, you can use Kubernetes pod labels to gather log data from specific pods and forward it to a log collector.

Suppose that you have an application composed of pods running alongside other pods in various namespaces. If those pods have labels that identify the application, you can gather and output their log data to a specific log collector.

.Procedure

. Create a `ClusterLogForwarder` custom resource (CR) YAML file.

. In the YAML file, specify the pod labels using simple equality-based selectors under `inputs[].name.application.selector.matchLabels`, as shown in the following example.
+
.Example `ClusterLogForwarder` CR YAML file
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance <1>
  namespace: openshift-logging <2>
spec:
  pipelines:
    - inputRefs: [ myAppLogData ] <3>
      outputRefs: [ default ] <4>
  inputs:
    - name: myAppLogData
      application:
        selector:
          matchLabels:
            environment: production <5>
            app: nginx <5>
        namespaces: <6>
          - app1
          - app2
  outputs: <7>
    - default
...
----
<1> The name of the `ClusterLogForwarder` CR must be `instance`.
<2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
<3> Specify the input for the pipeline.
<4> Specify the output for the pipeline.
<5> Specify the labels of pods whose log data you want to gather.
<6> Optional: Specify one or more namespaces.
<7> Specify the output to forward your log data to. The optional `default` output shown here sends log data to the internal Elasticsearch instance.

. Optional: To restrict the gathering of log data to specific namespaces, use `inputs[].name.application.namespaces`, as shown in the preceding example.

. Create the CR object:
+
[source,terminal]
----
$ oc create -f <file-name>.yaml
----
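
For reference, a pod is matched by the example selector above only if it carries both labels. A minimal sketch of the metadata such a pod would need (the pod name, namespace, and image are hypothetical):

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: nginx-prod # hypothetical pod name
  namespace: app1   # must be listed under inputs[].name.application.namespaces, if that field is set
  labels:
    environment: production # matches the matchLabels selector in the CR
    app: nginx              # both labels must be present for the pod to match
spec:
  containers:
    - name: nginx
      image: nginx:1.20 # hypothetical image
----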
4 changes: 2 additions & 2 deletions modules/cluster-logging-collector-log-forward-project.adoc
@@ -5,7 +5,7 @@
[id="cluster-logging-collector-log-forward-project_{context}"]
= Forwarding application logs from specific projects

You can use the Cluster Log Forwarder to send a copy of the application logs from specific projects to an external log aggregator instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator to receive the logs from {product-title}.

To configure forwarding application logs from a project, create a `ClusterLogForwarder` custom resource (CR) with at least one input from a project, optional outputs for other log aggregators, and pipelines that use those inputs and outputs.
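
A minimal sketch of such a CR, assuming a hypothetical external Fluentd endpoint (the project name, output name, and URL are illustrative):

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  inputs:
    - name: my-app-logs # input that selects application logs from one project
      application:
        namespaces:
          - my-project # hypothetical project name
  outputs:
    - name: fluentd-server-insecure # hypothetical external aggregator
      type: fluentdForward
      url: 'tcp://fluentdserver.example.com:24224'
  pipelines:
    - name: forward-my-app-logs # pipeline connecting the input to the output
      inputRefs:
        - my-app-logs
      outputRefs:
        - fluentd-server-insecure
----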

@@ -59,7 +59,7 @@ spec:
<3> Specify a name for the output.
<4> Specify the output type: `elasticsearch`, `fluentdForward`, `syslog`, or `kafka`.
<5> Specify the URL and port of the external log aggregator as a valid absolute URL. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
<6> If using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project and have *tls.crt*, *tls.key*, and *ca-bundle.crt* keys that each point to the certificates they represent.
<7> Configuration for an input to filter application logs from the specified projects.
<8> Configuration for a pipeline to use the input to send project application logs to an external Fluentd instance.
<9> The `my-app-logs` input.