forked from openshift/openshift-docs
Merge pull request openshift#13315 from mburke5678/logging-move-311-changes-to-40: Adding changes to 3.11 docs to 4.0

Showing 16 changed files with 393 additions and 168 deletions.
modules/efk-logging-elasticsearch-persistent-storage-about.adoc (67 additions, 0 deletions)
// Module included in the following assemblies:
//
// * logging/efk-logging-elasticsearch.adoc

[id='efk-logging-elasticsearch-persistent-storage-about_{context}']
= Configuring persistent storage for Elasticsearch

By default, the `openshift_logging` Ansible role creates an ephemeral deployment in which all of a pod's data is lost upon restart.

For production environments, each Elasticsearch deployment configuration requires a persistent storage volume. You can specify an existing persistent volume claim or allow {product-title} to create one.

* *Use existing PVCs.* If you create your own PVCs for the deployment, {product-title} uses those PVCs.
+
Name the PVCs to match the `openshift_logging_es_pvc_prefix` setting, which defaults to `logging-es`. Assign each PVC a name with a sequence number appended: `logging-es-0`, `logging-es-1`, `logging-es-2`, and so on.

* *Allow {product-title} to create a PVC.* If a PVC for Elasticsearch does not exist, {product-title} creates the PVC based on parameters in the Ansible inventory file, by default *_/etc/ansible/hosts_*.
+
[cols="3,7",options="header"]
|===
|Parameter
|Description

|`openshift_logging_es_pvc_size`
|Specify the size of the PVC request.

|`openshift_logging_elasticsearch_storage_type`
a|Specify the storage type as `pvc`.
[NOTE]
====
This is an optional parameter. Setting the `openshift_logging_es_pvc_size` parameter to a value greater than 0 automatically sets this parameter to `pvc` by default.
====

|`openshift_logging_es_pvc_prefix`
|Optionally, specify a custom prefix for the PVC.
|===
+
For example:
+
[source,bash]
----
openshift_logging_elasticsearch_storage_type=pvc
openshift_logging_es_pvc_size=104802308Ki
openshift_logging_es_pvc_prefix=es-logging
----
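If you create the PVCs yourself, the naming rule above can be sketched as a short shell loop. This is a hypothetical illustration, not part of the installer; the replica count of 3 is an assumption:

```shell
# Hypothetical sketch: the PVC names that match the default
# openshift_logging_es_pvc_prefix=logging-es for three Elasticsearch replicas.
prefix="logging-es"
replicas=3
names=""
i=0
while [ "$i" -lt "$replicas" ]; do
  names="$names ${prefix}-${i}"
  i=$((i + 1))
done
echo "Expected PVC names:$names"
```

This prints `Expected PVC names: logging-es-0 logging-es-1 logging-es-2`; each of those PVCs must exist before the deployment for {product-title} to adopt them.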
If you use dynamically provisioned PVs, the {product-title} logging installer creates PVCs that use the default storage class or the storage class specified with the `openshift_logging_elasticsearch_pvc_storage_class_name` parameter.
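For example, a minimal inventory fragment that pins the logging PVCs to a named storage class might look like the following. This is a sketch; the class name `managed-nfs-storage` is an assumed example, not a value from this document:

```
# Hypothetical inventory fragment; the storage class name is an assumed example.
openshift_logging_elasticsearch_pvc_storage_class_name=managed-nfs-storage
openshift_logging_es_pvc_dynamic=true
```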
If you use NFS storage, the {product-title} installer creates the persistent volumes based on the `openshift_logging_storage_*` parameters, and the {product-title} logging installer creates PVCs using the `openshift_logging_es_pvc_*` parameters. Make sure you specify the correct parameters to use persistent volumes with EFK. Also set the `openshift_enable_unsupported_configurations=true` parameter in the Ansible inventory file, as the logging installer blocks the installation of NFS with core infrastructure by default.

[WARNING]
====
Using NFS storage as a volume or a persistent volume (or via NAS such as Gluster) is not supported for Elasticsearch storage, as Lucene relies on file system behavior that NFS does not supply. Data corruption and other problems can occur.
====
|
modules/efk-logging-elasticsearch-persistent-storage-local.adoc (91 additions, 0 deletions)
// Module included in the following assemblies:
//
// * logging/efk-logging-elasticsearch.adoc

[id='efk-logging-elasticsearch-persistent-storage-local_{context}']
= Configuring NFS as local storage for Elasticsearch

You can allocate a large file on an NFS server and mount the file to the nodes. You can then use the file as a host path device.

.Prerequisites

Allocate a large file on an NFS server and mount the file to the nodes:

----
$ mount -t nfs nfserver:/nfs/storage/elasticsearch-1 /usr/local/es-storage
$ chown 1000:1000 /usr/local/es-storage
----

Then, use *_/usr/local/es-storage_* as a host mount as described in the following procedure. Use a different backing file as storage for each Elasticsearch replica.

This loopback must be maintained manually outside of {product-title}, on the node. You must not maintain it from inside a container.
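Because the mount is maintained on the node itself, you would typically also persist it across reboots, for example with an *_/etc/fstab_* entry. This is a sketch that reuses the server and paths from the example above:

```
nfserver:/nfs/storage/elasticsearch-1  /usr/local/es-storage  nfs  defaults  0 0
```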
.Procedure

To use a local disk volume (if available) on each node host as storage for an Elasticsearch replica:

. Grant the relevant service account the privilege to mount and edit a local volume:
+
----
$ oc adm policy add-scc-to-user privileged \
       system:serviceaccount:logging:aggregated-logging-elasticsearch <1>
----
<1> Use the project you created earlier, for example, *logging*, when running the logging playbook.

. Patch each Elasticsearch node definition to claim that privilege, for example:
+
----
$ for dc in $(oc get deploymentconfig --selector logging-infra=elasticsearch -o name); do
    oc scale $dc --replicas=0
    oc patch $dc \
       -p '{"spec":{"template":{"spec":{"containers":[{"name":"elasticsearch","securityContext":{"privileged": true}}]}}}}'
  done
----

. The Elasticsearch replicas must be located on the correct nodes to use the local storage, and should not move around even if those nodes are taken down for a period of time. This requires giving each Elasticsearch node a node selector that is unique to a node where an administrator has allocated storage for it. To configure a node selector, edit each Elasticsearch deployment configuration and add or edit the *nodeSelector* section to specify a unique label that you have applied for each desired node:
+
----
apiVersion: v1
kind: DeploymentConfig
spec:
  template:
    spec:
      nodeSelector:
        logging-es-node: "1" <1>
----
<1> This label must uniquely identify a replica with a single node that bears that label, in this case `logging-es-node=1`. Use the `oc label` command to apply labels to nodes as needed.
+
To automate applying the node selector, you can instead use the `oc patch` command:
+
----
$ oc patch dc/logging-es-<suffix> \
     -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-es-node":"1"}}}}}'
----
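Labeling a set of nodes to match those selectors can itself be scripted. The following hypothetical helper (the node names are assumptions) only prints the `oc label` commands so that you can review them before running them:

```shell
# Hypothetical helper: print one oc label command per assumed node, pairing
# each node with a unique logging-es-node value. Review, then run the output.
i=1
last=""
for node in node1.example.com node2.example.com node3.example.com; do
  last="oc label node ${node} logging-es-node=${i} --overwrite"
  echo "$last"
  i=$((i + 1))
done
```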
. Apply a local host mount to each replica. The following example assumes storage is mounted at the same path on each node:
+
----
$ for dc in $(oc get deploymentconfig --selector logging-infra=elasticsearch -o name); do
    oc set volume $dc \
       --add --overwrite --name=elasticsearch-storage \
       --type=hostPath --path=/usr/local/es-storage
    oc rollout latest $dc
    oc scale $dc --replicas=1
  done
----
|
modules/efk-logging-elasticsearch-persistent-storage-persistent.adoc (78 additions, 0 deletions)
// Module included in the following assemblies:
//
// * logging/efk-logging-elasticsearch.adoc

[id='efk-logging-elasticsearch-persistent-storage-persistent_{context}']
= Using NFS as a persistent volume for Elasticsearch

You can deploy NFS as an automatically provisioned persistent volume or by using a predefined NFS volume.

For more information, see _Sharing an NFS mount across two persistent volume claims_ to leverage shared storage for use by two separate containers.

*Using automatically provisioned NFS*

You can use NFS as a persistent volume where NFS is automatically provisioned.

.Procedure

. Add the following lines to the Ansible inventory file to create an NFS auto-provisioned storage class and dynamically provision the backing storage:
+
----
openshift_logging_es_pvc_storage_class_name=$nfsclass
openshift_logging_es_pvc_dynamic=true
----

. Use the following command to deploy the NFS volume using the logging playbook:
+
----
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml
----

. Use the following steps to create a PVC:

.. Edit the Ansible inventory file to set the PVC size:
+
----
openshift_logging_es_pvc_size=50Gi
----
+
[NOTE]
====
The logging playbook selects a volume based on size and might use an unexpected volume if any other persistent volume has the same size.
====

.. Use the following command to rerun the Ansible *_deploy_cluster.yml_* playbook:
+
----
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
----
+
The installer playbook creates the NFS volume based on the `openshift_logging_storage` variables.

*Using a predefined NFS volume*

You can deploy logging alongside the {product-title} cluster using an existing NFS volume.

.Procedure

. Edit the Ansible inventory file to configure the NFS volume and set the PVC size:
+
----
openshift_logging_storage_kind=nfs
openshift_enable_unsupported_configurations=true
openshift_logging_storage_access_modes=["ReadWriteOnce"]
openshift_logging_storage_nfs_directory=/srv/nfs
openshift_logging_storage_nfs_options=*(rw,root_squash)
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=100Gi
openshift_logging_storage_labels={:storage=>"logging"}
openshift_logging_install_logging=true
----

. Use the following command to redeploy the EFK stack:
+
----
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
----