
fixing grammar future tense to present tense, removing pronouns #8644

Closed
wants to merge 4 commits into from
Changes from 1 commit
adding comment changes from kalexand-rh
rlopez133 committed Jul 19, 2018
commit 66408c4be83f10161ef98acaa80d045abc48668d
127 changes: 64 additions & 63 deletions admin_guide/diagnostics_tool.adoc
@@ -33,44 +33,43 @@ connected to.

{product-title} may be deployed in numerous scenarios including:
Contributor

You can deploy {product-title} by using several methods:

Author

Do we want to start sentences with You?

Contributor

Yes. Our style guide says that docs are supposed to favor active, user-focused sentences. It's always better to say "you," meaning the user, "do <the thing>" instead of "<The thing> can be done."


* built from source
* included within a VM image
* as a container image
* via enterprise RPMs
* Built from source
* Included within a VM image
* As a container image
* Via enterprise RPMs

Each method implies a different configuration and environment. The diagnostics
were included within `openshift` binary to minimize environment assumptions and
Each method is suited for a different configuration and environment. The diagnostics
are included within the `openshift` binary to minimize environment assumptions and
provide the ability to run the diagnostics tool within an {product-title}
server or client.

To use the diagnostics tool, preferably on a master host and as cluster
administrator, run a `sudo` user:
administrator, run:

----
# oc adm diagnostics
----

The above command runs all available diagnostis skipping any that do not apply
The previous command runs all available diagnostics and skips any that do not apply
to the environment.

The diagnostics tool has the ability to run one or multiple specific diagnostics
via name or as an enabler to address issues within the {product-title} environment. For
example:
To investigate issues within your {product-title} environment, you can run one
or more diagnostic tests by name. For example:

----
$ sudo oc adm diagnostics <name1> <name2>
$ oc adm diagnostics <name1> <name2>
----

The options provided by the diagnostics tool require working configuration
files. For example, the *NodeConfigCheck* does not run unless a node
configuration is readily available.
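
For example, to run only the *NodeConfigCheck* diagnostic against an explicitly
specified node configuration file (the path is a placeholder in the document's
usual `<file_path>` style, not a literal value):

----
$ oc adm diagnostics NodeConfigCheck --node-config=<node_config_file_path>
----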

Diagnostics verifies that the configuration files reside in their standard
locations unless specified with flags (respectively,
`--config`, `--master-config`, and `--node-config`)
The diagnostics use the standard configuration file locations unless you
specify a different location by using the
`--config`, `--master-config`, and `--node-config` options.

The standard locations are listed below:

@@ -95,7 +94,7 @@ standard locations:

You can specify non-standard locations with the `--config`, `--master-config`,
and `--node-config` options. If a configuration file is not specified,
related diagnostics are skipped.
related diagnostics do not run.

Available diagnostics include:

@@ -173,13 +172,13 @@ nodes within {product-title} cluster due to:
location.
* Systemd units are configured to manage the server(s).
* Both master and node configuration files are in standard locations.
* Systemd units are created and configured for managing the nodes in a cluster
* Systemd units are created and configured for managing the nodes in a cluster.
* All components log to journald.

Standard location of the configuration files placed by an Ansible-deployed
The standard location of the configuration files placed by an Ansible-deployed
cluster ensures that running `oc adm diagnostics` works without any flags.
In the event, the standard location of the configuration files is not used,
options flags as those listed in the example below may be used.
If the standard location of the configuration files is not used, option flags
such as those listed in the following example may be used.

----
$ oc adm diagnostics --master-config=<file_path> --node-config=<file_path>
@@ -194,16 +193,16 @@ run.
[[admin-guide-diagnostics-tool-client-environment]]
== Running Diagnostics in a Client Environment

The diagnostics runs using as much access as the existing user running the
diagnostic has available. The diagnostic may run as an ordinary user, a
*cluster-admin* user or *cluster-admin* user.
The diagnostics tool runs using the level of permissions granted to the
account from which you run it. The diagnostics tool can run as an ordinary user
or as a _cluster-admin_ user.

A client with ordinary access should be able to diagnose its connection
A client with ordinary access can diagnose its connection
to the master and run a diagnostic pod. If multiple users or masters are
configured, connections are tested for all, but the diagnostic pod
configured, connections are tested for all, but the diagnostic pod
only runs against the current user, server, or project.

A client with *cluster-admin* access available (for any user, but only the
A client with _cluster-admin_ access available (for any user, but only the
current master) can diagnose the status of infrastructure such as nodes,
registry, and router. In each case, running `oc adm diagnostics` searches for
the standard client configuration file in its standard location and uses it if
@@ -245,14 +244,16 @@ using the provided *_health.yml_* playbook.

[WARNING]
====
Due to potential changes the health check playbooks could make to the
environment, the playbooks should only be run against clusters that have been
Due to potential changes that the health check playbooks can make to the
environment, run the playbooks against only clusters that were
deployed using Ansible with the same inventory file used during deployment. The
changes consist of installing dependencies in order to gather required
information. In some circumstances, additional system components (i.e. `docker`
or networking configurations) may be altered if their current state differs
from the configuration in the inventory file. These health checks should *only*
be run if the administrator does not expect the inventory file to make any
information. In some circumstances, additional system components, such as `docker`
or networking configurations, are altered if their current state differs
from the configuration in the inventory file.

Run these health checks only if
the administrator does not expect the inventory file to make any
changes to the existing cluster configuration.
====

@@ -266,12 +267,12 @@ changes to the existing cluster configuration.
|`etcd_imagedata_size`
|This check measures the total size of {product-title} image data in an etcd
cluster. The check fails if the calculated size exceeds a user-defined limit. If
no limit is specified, this check fails if the size of image data amounts to
you do not specify a limit, this check fails if the size of image data amounts to
50% or more of the currently used space in the etcd cluster.

A failure from this check indicates that a significant amount of space in etcd
is being taken up by {product-title} image data, which can eventually result in
the etcd cluster crashing.
A failure from this check indicates that
{product-title} image data takes up a significant amount of space in etcd,
which can eventually result in the etcd cluster crashing.

A user-defined limit may be set by passing the `etcd_max_image_data_size_bytes`
variable. For example, setting `etcd_max_image_data_size_bytes=40000000000`
@@ -300,20 +301,19 @@ installations). Checks that *docker*'s total usage does not exceed a
user-defined limit. If no user-defined limit is set, *docker*'s maximum usage
threshold defaults to 90% of the total size available.

The threshold limit for total percent usage can be set with a variable in the
inventory file, for example `max_thinpool_data_usage_percent=90`.
You can set the threshold limit for total percentage usage with a variable in
the inventory file, for example `max_thinpool_data_usage_percent=90`.

This also checks that *docker*'s storage is using a
xref:../scaling_performance/optimizing_storage.adoc#choosing-a-graph-driver[supported configuration].

|`curator`, `elasticsearch`, `fluentd`, `kibana`
|This set of checks verifies that Curator, Kibana, Elasticsearch, and Fluentd
pods have been deployed and are in a `running` state, and that a connection can
pods for xref:../install_config/aggregate_logging.adoc#install-config-aggregate-logging[cluster logging]
are deployed and are in a `running` state, and that a connection can
be established between the control host and the exposed Kibana URL. These checks
run only if the `openshift_logging_install_logging` inventory variable is set to
`true` to ensure that they are executed in a deployment where
xref:../install_config/aggregate_logging.adoc#install-config-aggregate-logging[cluster
logging] is enabled.
`true`.

|`logging_index_time`
|This check detects higher than normal time delays between log creation and log
Expand Down Expand Up @@ -341,8 +341,8 @@ xref:../install_config/redeploying_certificates.adoc#install-config-redeploying-
[[admin-guide-health-checks-via-ansible-playbook]]
=== Running Health Checks via ansible-playbook

The *openshift-ansible* health checks are executed using the `ansible-playbook`
command and requires specifying the cluster's inventory file and the *_health.yml_*
The *openshift-ansible* health checks are run by using the `ansible-playbook`
command. You must also specify the cluster's inventory file and the *_health.yml_*
playbook:

----
@@ -355,7 +355,7 @@ ifdef::openshift-origin[]
endif::[]
----

In order to set variables in the command line, include the `-e` flag with any desired
To set variables in the command line, include the `-e` flag with any desired
variables in `key=value` format. For example:

----
@@ -371,21 +371,21 @@ endif::[]
----

To disable specific checks, include the variable `openshift_disable_check` with
a comma-delimited list of check names within the inventory file before running the
a comma-delimited list of check names in the inventory file before you run the
playbook. For example:

----
openshift_disable_check=etcd_traffic,etcd_volume
----

Alternatively, set any checks to disable as variables with `-e
openshift_disable_check=<check1>,<check2>` when running the `ansible-playbook`
Alternatively, set any checks to disable as variables by using the `-e
openshift_disable_check=<check1>,<check2>` option when you run the `ansible-playbook`
command.
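
For example, an `ansible-playbook` invocation that disables two of the checks
named in this section for a single run might look like the following. The
inventory and playbook paths are placeholders, not literal values:

----
$ ansible-playbook -i <inventory_file> \
    -e openshift_disable_check=etcd_traffic,etcd_volume \
    <health_yml_playbook_path>
----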

[[admin-guide-health-checks-via-docker-cli]]
=== Running Health Checks via Docker CLI

The *openshift-ansible* playbooks may run in a Docker container avoiding the
You can run the *openshift-ansible* playbooks in a Docker container, which avoids
the need to install and configure Ansible, on any host that can
run the
ifdef::openshift-enterprise[]
@@ -396,8 +396,8 @@ ifdef::openshift-origin[]
endif::[]
image via the Docker CLI.

As a non-root user that has privileges to run containers specify the cluster's
inventory file and run the *_health.yml_* playbook:
Run the following command as a non-root user that has privileges to run
containers:

----
# docker run -u `id -u` \ <1>
@@ -426,23 +426,24 @@ used according to the `INVENTORY_FILE` environment variable in the container.
inside the container.
<5> Set any variables desired for a single run with the `-e key=value` format.

In the above command, the SSH key is mounted with the `:Z` flag so that the
container can read the SSH key from its restricted SELinux context. This ensures
the original SSH key file is relabeled similarly to
In the previous command, the SSH key is mounted with the `:Z` flag so that the
container can read the SSH key from its restricted SELinux context. Adding this
flag ensures the original SSH key file is relabeled similarly to
`system_u:object_r:container_file_t:s0:c113,c247`. For more details about `:Z`,
see the `docker-run(1)` man page.

[IMPORTANT]
====
These volume mount specifications can have unexpected consequences. For example,
if you mount, and therefore relabel, the *_$HOME/.ssh_* directory, *sshd*
becomes unable to access the public keys to allow remote login. To avoid
if you mount, and therefore relabel, the *_$HOME/.ssh_* directory, *sshd*
is unable to access the public keys to allow remote login. To avoid
altering the original file labels, mount a copy of the SSH key or directory.
====

You might mount an entire *_.ssh_* directory for various reasons. For example,
this would allow you to use an SSH configuration to match keys with hosts or
modify other connection parameters. It could also allow a user to provide a
*_known_hosts_* file and have SSH validate host keys, which is disabled by the
default configuration and can be re-enabled with an environment variable by
adding `-e ANSIBLE_HOST_KEY_CHECKING=True` to the `docker` command line.
If you mount an entire *_.ssh_* directory:

* You can use an SSH configuration to match keys with hosts or modify other
connection parameters.
* You can provide a *_known_hosts_* file and use SSH to validate host keys. To
use this feature, which is disabled by default, add the `-e
ANSIBLE_HOST_KEY_CHECKING=True` environment variable to the `docker` command
line.
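
A sketch of how the host key checking variable fits into the container
invocation, abbreviated from the fuller `docker run` example earlier in this
section. Every angle-bracketed value is a placeholder standing in for the
corresponding values shown in that example:

----
# docker run -u `id -u` \
    -v <known_hosts_file>:<container_ssh_dir>/known_hosts:Z \
    -e INVENTORY_FILE=<inventory_path_in_container> \
    -e ANSIBLE_HOST_KEY_CHECKING=True \
    <image_and_playbook_arguments>
----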