Merge pull request openshift#13315 from mburke5678/logging-move-311-changes-to-40

Adding changes from the 3.11 docs to 4.0
mburke5678 authored Jan 14, 2019
2 parents 3553539 + e39c48b commit ee93b11
Showing 16 changed files with 393 additions and 168 deletions.
4 changes: 2 additions & 2 deletions _topic_map.yml
@@ -152,8 +152,6 @@ Topics:
File: efk-logging-deploy
- Name: Uninstalling the EFK stack
File: efk-logging-uninstall
- Name: Troubleshooting Kubernetes
File: efk-logging-troubleshooting
- Name: Working with Elasticsearch
File: efk-logging-elasticsearch
- Name: Working with Fluentd
@@ -170,5 +168,7 @@ Topics:
File: efk-logging-manual-rollout
- Name: Configuring systemd-journald and rsyslog
File: efk-logging-systemd
- Name: Troubleshooting Kubernetes
File: efk-logging-troubleshooting
- Name: Exported fields
File: efk-logging-exported-fields
6 changes: 5 additions & 1 deletion logging/efk-logging-elasticsearch.adoc
@@ -14,7 +14,11 @@ toc::[]

include::modules/efk-logging-elasticsearch-ha.adoc[leveloffset=+1]

include::modules/efk-logging-elasticsearch-persistent-storage.adoc[leveloffset=+1]
include::modules/efk-logging-elasticsearch-persistent-storage-about.adoc[leveloffset=+1]

include::modules/efk-logging-elasticsearch-persistent-storage-persistent.adoc[leveloffset=+2]

include::modules/efk-logging-elasticsearch-persistent-storage-local.adoc[leveloffset=+2]

include::modules/efk-logging-elasticsearch-scaling.adoc[leveloffset=+1]

8 changes: 8 additions & 0 deletions logging/efk-logging-fluentd.adoc
@@ -13,6 +13,14 @@ toc::[]
// assemblies.


include::modules/efk-logging-fluentd-pod-location.adoc[leveloffset=+1]

include::modules/efk-logging-fluentd-log-viewing.adoc[leveloffset=+1]

include::modules/efk-logging-fluentd-log-location.adoc[leveloffset=+1]

include::modules/efk-logging-fluentd-log-rotation.adoc[leveloffset=+1]

include::modules/efk-logging-external-fluentd.adoc[leveloffset=+1]

include::modules/efk-logging-fluentd-connections.adoc[leveloffset=+1]
2 changes: 1 addition & 1 deletion modules/efk-logging-about-fluentd.adoc
@@ -7,7 +7,7 @@

{product-title} uses Fluentd to collect data about your cluster.

Fluentd is deployed as a DaemonSet in {product-title} that deploys replicas according to a node
Fluentd is deployed as a DaemonSet in {product-title} that deploys pods according to a node
label selector, which you can specify with the inventory parameter
`openshift_logging_fluentd_nodeselector`. The default is `logging-infra-fluentd`.
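For example, a minimal inventory entry that sets the node selector to the default label described in this module (the `'true'` value is the usual convention, shown here as an assumption):

----
openshift_logging_fluentd_nodeselector={'logging-infra-fluentd': 'true'}
----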
As part of the OpenShift cluster installation, it is recommended that you add the
21 changes: 1 addition & 20 deletions modules/efk-logging-deploy-pre.adoc
@@ -22,7 +22,7 @@ various areas of the EFK stack.
+
.. Ensure that you have deployed a router for the cluster.
+
** Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch replica
** Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node
requires its own storage volume.

. Specify a node selector
@@ -34,22 +34,3 @@ node selector should be used.
$ oc adm new-project logging --node-selector=""
----

* Choose a project.
+
Once deployed, the EFK stack collects logs for every
project within your {product-title} cluster. But the stack requires a dedicated project, by default *openshift-logging*.
The Ansible playbook creates the project for you. You only need to create a project if you want
to specify a node-selector on it.
+
----
$ oc adm new-project logging --node-selector=""
$ oc project logging
----
+
[NOTE]
====
Specifying an empty node selector on the project is recommended, as Fluentd should be deployed
throughout the cluster and any selector would restrict where it is
deployed. To control component placement, specify node selectors per component to
be applied to their deployment configurations.
====
7 changes: 5 additions & 2 deletions modules/efk-logging-deploy-variables.adoc
@@ -344,7 +344,7 @@ server cert. The default is the internal CA.
|The location of the client key Fluentd uses for `openshift_logging_es_host`.

|`openshift_logging_es_cluster_size`
|Elasticsearch replicas to deploy. Redundancy requires at least three or more.
|Elasticsearch nodes to deploy. Redundancy requires at least three nodes.

|`openshift_logging_es_cpu_limit`
|The CPU limit for the Elasticsearch cluster.
@@ -377,7 +377,10 @@ openshift_logging_es_pvc_dynamic value.
|`openshift_logging_es_pvc_size`
|Size of the persistent volume claim to
create per Elasticsearch instance. For example, 100G. If omitted, no PVCs are
created and ephemeral volumes are used instead. If this parameter is set, `openshift_logging_elasticsearch_storage_type` is set to `pvc`.
created and ephemeral volumes are used instead. If you set this parameter, the logging installer sets `openshift_logging_elasticsearch_storage_type` to `pvc`.

|`openshift_logging_elasticsearch_storage_type`
|Sets the Elasticsearch storage type. If you are using Persistent Elasticsearch Storage, the logging installer sets this to `pvc`.

|`openshift_logging_elasticsearch_storage_type`
|Sets the Elasticsearch storage type. If you are using Persistent Elasticsearch Storage, set to `pvc`.
67 changes: 67 additions & 0 deletions modules/efk-logging-elasticsearch-persistent-storage-about.adoc
@@ -0,0 +1,67 @@
// Module included in the following assemblies:
//
// * logging/efk-logging-elasticsearch.adoc

[id='efk-logging-elasticsearch-persistent-storage-about_{context}']
= Configuring persistent storage for Elasticsearch

By default, the `openshift_logging` Ansible role creates an ephemeral
deployment in which all of a pod's data is lost upon restart.

For production environments, each Elasticsearch deployment configuration requires a persistent storage volume. You can specify an existing persistent
volume claim or allow {product-title} to create one.

* *Use existing PVCs.* If you create your own PVCs for the deployment, {product-title} uses those PVCs.
+
Name the PVCs to match the `openshift_logging_es_pvc_prefix` setting, which defaults to
`logging-es`. Assign each PVC a name with a sequence number added to it: `logging-es-0`,
`logging-es-1`, `logging-es-2`, and so on.
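+
If you create the PVCs manually, a minimal sketch of the first claim might look like the following (the size and access mode are assumptions, not values from this commit):
+
[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logging-es-0  # default openshift_logging_es_pvc_prefix plus sequence number
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
----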

* *Allow {product-title} to create a PVC.* If a PVC for Elasticsearch does not exist, {product-title} creates the PVC based on parameters
in the Ansible inventory file, by default *_/etc/ansible/hosts_*.
+
[cols="3,7",options="header"]
|===
|Parameter
|Description

|`openshift_logging_es_pvc_size`
|Specify the size of the PVC request.

|`openshift_logging_elasticsearch_storage_type`
a|Specify the storage type as `pvc`.
[NOTE]
====
This is an optional parameter. Setting the `openshift_logging_es_pvc_size` parameter to a value greater than 0 automatically sets this parameter to `pvc` by default.
====

|`openshift_logging_es_pvc_prefix`
|Optionally, specify a custom prefix for the PVC.
|===
+
For example:
+
[source,bash]
----
openshift_logging_elasticsearch_storage_type=pvc
openshift_logging_es_pvc_size=104802308Ki
openshift_logging_es_pvc_prefix=es-logging
----

If you use dynamically provisioned PVs, the {product-title} logging installer creates PVCs
that use the default storage class or the storage class specified with the `openshift_logging_es_pvc_storage_class_name` parameter.
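For example, a minimal inventory sketch for dynamic provisioning (the storage class name is an assumption):

----
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_storage_class_name=managed-nfs-storage
----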

If you use NFS storage, the {product-title} installer creates the persistent volumes based on the `openshift_logging_storage_*` parameters,
and the {product-title} logging installer creates PVCs using the `openshift_logging_es_pvc_*` parameters.
Make sure you specify the correct parameters to use persistent volumes with EFK.
Also set the `openshift_enable_unsupported_configurations=true` parameter in the Ansible inventory file,
as the logging installer blocks the installation of NFS with core infrastructure by default.

[WARNING]
====
Using NFS storage as a volume or a persistent volume (or via NAS such as
Gluster) is not supported for Elasticsearch storage, as Lucene relies on file
system behavior that NFS does not supply. Data corruption and other problems can
occur.
====

91 changes: 91 additions & 0 deletions modules/efk-logging-elasticsearch-persistent-storage-local.adoc
@@ -0,0 +1,91 @@
// Module included in the following assemblies:
//
// * logging/efk-logging-elasticsearch.adoc

[id='efk-logging-elasticsearch-persistent-storage-local_{context}']
= Configuring NFS as local storage for Elasticsearch


You can allocate a large file on an NFS server and mount the file to the nodes. You can then use the file as a host path device.

.Prerequisites

Allocate a large file on an NFS server and mount the file to the nodes:

----
$ mount -t nfs nfserver:/nfs/storage/elasticsearch-1 /usr/local/es-storage
$ chown 1000:1000 /usr/local/es-storage
----

Then, use *_/usr/local/es-storage_* as a host-mount as described below.
Use a different backing file as storage for each Elasticsearch replica.

This loopback must be maintained manually outside of {product-title}, on the
node. You must not maintain it from inside a container.

.Procedure

To use a local disk volume (if available) on each
node host as storage for an Elasticsearch replica:

. The relevant service account must be given the privilege to mount and edit a
local volume:
+
----
$ oc adm policy add-scc-to-user privileged \
system:serviceaccount:logging:aggregated-logging-elasticsearch <1>
----
<1> Use the project you created earlier, for example, *logging*, when running the
logging playbook.

. Each Elasticsearch node definition must be patched to claim that privilege,
for example:
+
----
$ for dc in $(oc get deploymentconfig --selector logging-infra=elasticsearch -o name); do
    oc scale $dc --replicas=0
    oc patch $dc \
      -p '{"spec":{"template":{"spec":{"containers":[{"name":"elasticsearch","securityContext":{"privileged": true}}]}}}}'
  done
----

. The Elasticsearch replicas must be located on the correct nodes to use the local
storage, and should not move around even if those nodes are taken down for a
period of time. This requires giving each Elasticsearch node a node selector
that is unique to a node where an administrator has allocated storage for it. To
configure a node selector, edit each Elasticsearch deployment configuration and
add or edit the *nodeSelector* section to specify a unique label that you have
applied for each desired node:
+
----
apiVersion: v1
kind: DeploymentConfig
spec:
template:
spec:
nodeSelector:
logging-es-node: "1" <1>
----
<1> This label should uniquely identify a replica with a single node that bears that
label, in this case `logging-es-node=1`. Use the `oc label` command to apply
labels to nodes as needed.
+
To automate applying the node selector you can instead use the `oc patch` command:
+
----
$ oc patch dc/logging-es-<suffix> \
-p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-es-node":"1"}}}}}'
----
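+
For example, you might apply the label from the previous snippet to a node as follows (the node name is an assumption):
+
----
$ oc label node node1.example.com logging-es-node=1
----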

. Apply a local host mount to each replica. The following example assumes storage is mounted at the same path on each node:
+
----
$ for dc in $(oc get deploymentconfig --selector logging-infra=elasticsearch -o name); do
    oc set volume $dc \
      --add --overwrite --name=elasticsearch-storage \
      --type=hostPath --path=/usr/local/es-storage
    oc rollout latest $dc
    oc scale $dc --replicas=1
  done
----
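After the rollout completes, you can confirm that each Elasticsearch pod landed on its labeled node; a sketch, assuming the standard `component=es` label on the Elasticsearch pods:

----
$ oc get pods --selector component=es -o wide
----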

78 changes: 78 additions & 0 deletions modules/efk-logging-elasticsearch-persistent-storage-persistent.adoc
@@ -0,0 +1,78 @@
// Module included in the following assemblies:
//
// * logging/efk-logging-elasticsearch.adoc

[id='efk-logging-elasticsearch-persistent-storage-persistent_{context}']
= Using NFS as a persistent volume for Elasticsearch

You can deploy NFS either as an automatically provisioned persistent volume or by using a predefined NFS volume.

For more information, see _Sharing an NFS mount across two persistent volume claims_ to leverage shared storage for use by two separate containers.


*Using automatically provisioned NFS*

You can use NFS as a persistent volume where NFS is automatically provisioned.

.Procedure

. Add the following lines to the Ansible inventory file to create an NFS auto-provisioned storage class and dynamically provision the backing storage:
+
----
openshift_logging_es_pvc_storage_class_name=$nfsclass
openshift_logging_es_pvc_dynamic=true
----

. Use the following command to deploy the NFS volume using the logging playbook:
+
----
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml
----

. Use the following steps to create a PVC:

.. Edit the Ansible inventory file to set the PVC size:
+
----
openshift_logging_es_pvc_size=50Gi
----
+
[NOTE]
====
The logging playbook selects a volume based on size and might use an unexpected volume if any other persistent volume has the same size.
====

.. Use the following command to rerun the Ansible *_deploy_cluster.yml_* playbook:
+
----
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
----
+
The installer playbook creates the NFS volume based on the `openshift_logging_storage_*` variables.

*Using a predefined NFS volume*

You can deploy logging alongside the {product-title} cluster using an existing NFS volume.

.Procedure

. Edit the Ansible inventory file to configure the NFS volume and set the PVC size:
+
----
openshift_logging_storage_kind=nfs
openshift_enable_unsupported_configurations=true
openshift_logging_storage_access_modes=["ReadWriteOnce"]
openshift_logging_storage_nfs_directory=/srv/nfs
openshift_logging_storage_nfs_options=*(rw,root_squash)
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=100Gi
openshift_logging_storage_labels={:storage=>"logging"}
openshift_logging_install_logging=true
----

. Use the following command to redeploy the EFK stack:
+
----
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
----
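Once the playbook finishes, you can verify that the Elasticsearch PVCs are bound to the NFS-backed volume; a sketch, assuming the default *openshift-logging* project:

----
$ oc get pvc -n openshift-logging
----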
