Commit a35ce7b

Author: Brice Fallon-Freeman
Merge pull request #10983 from bfallonf/diag_edits
Edits after #8644
2 parents: 9774cc7 + 690e309

1 file changed: +41 −46 lines


admin_guide/diagnostics_tool.adoc

Lines changed: 41 additions & 46 deletions
@@ -31,17 +31,16 @@ connected to.
 [[admin-guide-using-the-diagnostics-tool]]
 == Using the Diagnostics Tool
 
-{product-title} can be deployed in several ways. These include:
+You can deploy {product-title} in several ways. These include:
 
 * Built from source
-* Included in a VM image
+* Included within a VM image
 * As a container image
-* As enterprise RPMs
+* Using enterprise RPMs
 
-Each method implies a different configuration and environment. To minimize
-environment assumptions, diagnostics are included with the `openshift`
-binary to provide the ability to run the diagnostics tool within an
-{product-title} server or client.
+Each method is suited for a different configuration and environment. To minimize
+environment assumptions, the diagnostics tool is included with the `openshift`
+binary to provide diagnostics within an {product-title} server or client.
 
 To use the diagnostics tool, preferably on a master host and as cluster
 administrator, run:
@@ -50,7 +49,7 @@ administrator, run:
 # oc adm diagnostics
 ----
 
-This runs all available diagnostics, skipping any that do not apply.
+This runs all available diagnostics and skips any that do not apply to the environment.
 
 You can also run one or more specific diagnostics by name as you work to
 address issues. For example:
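As an illustrative sketch of such an invocation (the diagnostic names shown are examples, not taken from this diff; *NodeConfigCheck* is referenced later in this file, and availability depends on your build):

----
# oc adm diagnostics NodeConfigCheck AnalyzeLogs
----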
@@ -63,8 +62,7 @@ The options for the diagnostics tool require working configuration files. For
 example, the *NodeConfigCheck* does not run unless a node configuration is
 available.
 
-The diagnostics tool verifies that the configuration files reside in their
-standard locations:
+The diagnostics tool uses the standard configuration file locations by default:
 
 * Client:
 ** As indicated by the `$KUBECONFIG` environment variable
@@ -147,14 +145,14 @@ If there are any errors, this diagnostic stores results and retrieved files in a
 [[admin-guide-diagnostics-tool-server-environment]]
 == Running Diagnostics in a Server Environment
 
-Master and node diagnostics are most useful in an Ansible-deployed cluster. This
-provides some diagnostic benefits:
+An Ansible-deployed cluster provides additional diagnostic benefits for
+nodes within an {product-title} cluster. These include:
 
 * Master and node configuration is based on a configuration file in a standard
 location.
 * Systemd units are configured to manage the server(s).
 * Both master and node configuration files are in standard locations.
-* Systemd units are created and configured for managing the nodes in a cluster
+* Systemd units are created and configured for managing the nodes in a cluster.
 * All components log to journald.
 
 Keeping to the default location of the configuration files placed by an
@@ -175,21 +173,19 @@ run.
 [[admin-guide-diagnostics-tool-client-environment]]
 == Running Diagnostics in a Client Environment
 
-You can access the diagnostics tool as an ordinary user, as a *cluster-admin*
-user, and can run on a host where {product-title} master or node servers are
-operating. The diagnostics attempt to use as much access as the user has
-available.
+You can run the diagnostics tool as an ordinary user or a `cluster-admin`, and
+it runs using the level of permissions granted to the account from which you
+run it.
 
-A client with ordinary access should be able to diagnose its connection
-to the master and run a diagnostic pod. If multiple users or masters are
-configured, connections are tested for all, but the diagnostic pod
-only runs against the current user, server, or project.
+A client with ordinary access can diagnose its connection to the master and run
+a diagnostic pod. If multiple users or masters are configured, connections are
+tested for all, but the diagnostic pod only runs against the current user,
+server, or project.
 
-A client with *cluster-admin* access available (for any user, but only the
-current master) can diagnose the status of infrastructure such as nodes,
-registry, and router. In each case, running `oc adm diagnostics` searches for
-the standard client configuration file in its standard location and uses it if
-available.
+A client with `cluster-admin` access can diagnose the status of infrastructure
+such as nodes, registry, and router. In each case, running `oc adm diagnostics`
+searches for the standard client configuration file in its standard location and
+uses it if available.
 
 [[ansible-based-tooling-health-checks]]
 == Ansible-based Health Checks
@@ -228,14 +224,14 @@ using the provided *_health.yml_* playbook.
 [WARNING]
 ====
 Due to potential changes the health check playbooks can make to the environment,
-you must run the playbooks against only clusters that were deployed using
-Ansible and using the same inventory file that used during deployment. The
-changes consist of installing dependencies so that the checks can gather the
-required information. In some circumstances, additional system components, such
-as `docker` or networking configurations, can be altered if their current state
-differs from the configuration in the inventory file. You should run these
-health checks only if you do not expect the inventory file to make any changes
-to the existing cluster configuration.
+you must run the playbooks against only Ansible-deployed clusters and using the
+same inventory file used for deployment. The changes consist of installing
+dependencies so that the checks can gather the required information. In some
+circumstances, additional system components, such as `docker` or networking
+configurations, can change if their current state differs from the configuration
+in the inventory file. You should run these health checks only if you do not
+expect the inventory file to make any changes to the existing cluster
+configuration.
 ====
 
 [[admin-guide-diagnostics-tool-ansible-checks]]
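For context, the warning above assumes an `ansible-playbook` invocation along these lines; the inventory path is a placeholder and the playbook path is an assumption based on common openshift-ansible layouts, not taken from this diff:

----
# ansible-playbook -i /path/to/deployment/inventory \
    playbooks/byo/openshift-checks/health.yml
----

Pointing `-i` at the same inventory file used for deployment is what keeps the checks from making changes to the existing cluster configuration.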
@@ -282,7 +278,7 @@ installations). Checks that *docker*'s total usage does not exceed a
 user-defined limit. If no user-defined limit is set, *docker*'s maximum usage
 threshold defaults to 90% of the total size available.
 
-The threshold limit for total percent usage can be set with a variable in the
+You can set the threshold limit for total percent usage with a variable in the
 inventory file, for example `max_thinpool_data_usage_percent=90`.
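As a sketch of where that variable lives, assuming a conventional openshift-ansible inventory (the group name and surrounding layout are assumptions for illustration, not taken from this diff):

----
[OSEv3:vars]
# Hypothetical example: flag docker storage usage above 80%
# instead of the default 90% threshold.
max_thinpool_data_usage_percent=80
----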
 
 This also checks that *docker*'s storage is using a
@@ -378,8 +374,7 @@ ifdef::openshift-origin[]
 endif::[]
 image via the Docker CLI.
 
-As a non-root user that has privileges to run containers specify the cluster's
-inventory file and the *_health.yml_* playbook:
+Run the following as a non-root user that has privileges to run containers:
 
 ----
 # docker run -u `id -u` \ <1>
@@ -408,9 +403,9 @@ used according to the `INVENTORY_FILE` environment variable in the container.
 inside the container.
 <5> Set any variables desired for a single run with the `-e key=value` format.
 
-In the above command, the SSH key is mounted with the `:Z` flag so that the
-container can read the SSH key from its restricted SELinux context; this means
-that your original SSH key file will be relabeled to something like
+In the previous command, the SSH key is mounted with the `:Z` option so that the
+container can read the SSH key from its restricted SELinux context. Adding this
+option means that your original SSH key file is relabeled similarly to
 `system_u:object_r:container_file_t:s0:c113,c247`. For more details about `:Z`,
 see the `docker-run(1)` man page.
 
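To avoid relabeling the original key file, one approach is to mount a copy instead. This sketch abbreviates the rest of the `docker run` invocation with `...`; the copy path and the container-side path are assumptions for illustration:

----
$ cp ~/.ssh/id_rsa /tmp/health-check-key
# docker run -u `id -u` \
    -v /tmp/health-check-key:/opt/app-root/src/.ssh/id_rsa:Z \
    ...
----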
@@ -422,9 +417,9 @@ becomes unable to access the public keys to allow remote login. To avoid
 altering the original file labels, mount a copy of the SSH key or directory.
 ====
 
-You might mount an entire *_.ssh_* directory for various reasons. For example,
-this would allow you to use an SSH configuration to match keys with hosts or
-modify other connection parameters. It could also allow a user to provide a
-*_known_hosts_* file and have SSH validate host keys, which is disabled by the
-default configuration and can be re-enabled with an environment variable by
-adding `-e ANSIBLE_HOST_KEY_CHECKING=True` to the `docker` command line.
+Mounting an entire *_.ssh_* directory can be helpful for:
+
+* Allowing you to use an SSH configuration to match keys with hosts or
+modify other connection parameters.
+* Allowing a user to provide a *_known_hosts_* file and have SSH validate host keys. This is disabled by the default configuration and can be re-enabled with an environment variable by adding `-e ANSIBLE_HOST_KEY_CHECKING=True` to the `docker` command line.
+
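A hypothetical sketch of mounting a whole directory, consistent with the earlier advice to mount a copy so the original files are not relabeled (the copy step, container-side path, and `...` abbreviation are assumptions for illustration):

----
$ cp -r ~/.ssh /tmp/health-check-ssh
# docker run -u `id -u` \
    -v /tmp/health-check-ssh:/opt/app-root/src/.ssh:Z \
    -e ANSIBLE_HOST_KEY_CHECKING=True \
    ...
----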
