You can run the diagnostics tool as an ordinary user or a `cluster-admin`, and
it runs using the level of permissions granted to the account from which you
run it.

A client with ordinary access can diagnose its connection to the master and run
a diagnostic pod. If multiple users or masters are configured, connections are
tested for all, but the diagnostic pod only runs against the current user,
server, or project.
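
For example, a client with ordinary access can limit a run to specific checks by
naming individual diagnostics on the command line. The diagnostic names shown
here, `ConfigContexts` and `DiagnosticPod`, are illustrative and may vary between
versions:

----
$ oc adm diagnostics ConfigContexts DiagnosticPod
----
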

A client with `cluster-admin` access can diagnose the status of infrastructure
such as nodes, registry, and router. In each case, running `oc adm diagnostics`
searches for the standard client configuration file in its standard location and
uses it if available.
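
For example, a `cluster-admin` user on a master host might point the tool at an
administrative client configuration explicitly instead of relying on the
standard location. The kubeconfig path below is a common default and is only an
assumption; adjust it to your environment:

----
# oc adm diagnostics --config=/etc/origin/master/admin.kubeconfig
----
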
[[ansible-based-tooling-health-checks]]
== Ansible-based Health Checks

[WARNING]
====
Due to potential changes the health check playbooks can make to the environment,
you must run the playbooks only against clusters that were deployed using
Ansible, and only with the same inventory file that was used for deployment. The
changes consist of installing dependencies so that the checks can gather the
required information. In some circumstances, additional system components, such
as `docker` or networking configurations, can change if their current state
differs from the configuration in the inventory file. Run these health checks
only if you do not expect the inventory file to make any changes to the existing
cluster configuration.
====
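
For example, a run of the health checks against an existing inventory might look
like the following. The playbook path shown assumes an RPM installation of
*openshift-ansible*; adjust the inventory and playbook paths to match your
deployment:

----
$ ansible-playbook -i /etc/ansible/hosts \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/health.yml
----
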
[[admin-guide-diagnostics-tool-ansible-checks]]
installations). Checks that *docker*'s total usage does not exceed a
user-defined limit. If no user-defined limit is set, *docker*'s maximum usage
threshold defaults to 90% of the total size available.

You can set the threshold limit for total percent usage with a variable in the
inventory file, for example `max_thinpool_data_usage_percent=90`.
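
For example, an inventory entry for this limit might look like the following.
The `[OSEv3:vars]` group name assumes the standard *openshift-ansible* inventory
layout:

----
[OSEv3:vars]
# Maximum allowed total percent usage for docker storage (example value)
max_thinpool_data_usage_percent=90
----
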
This also checks that *docker*'s storage is using a

image via the Docker CLI.

Run the following as a non-root user that has privileges to run containers:

----
# docker run -u `id -u` \ <1>
----

used according to the `INVENTORY_FILE` environment variable in the container.
inside the container.

<5> Set any variables desired for a single run with the `-e key=value` format.

In the previous command, the SSH key is mounted with the `:Z` option so that the
container can read the SSH key from its restricted SELinux context. Adding this
option means that your original SSH key file is relabeled similarly to
`system_u:object_r:container_file_t:s0:c113,c247`. For more details about `:Z`,
see the `docker-run(1)` man page.

becomes unable to access the public keys to allow remote login. To avoid
altering the original file labels, mount a copy of the SSH key or directory.
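
For example, one way to avoid relabeling the original key is to mount a copy of
it instead. The container-side path below is an assumption; match it to the
volume mount used in your `docker run` command:

----
$ cp ~/.ssh/id_rsa /tmp/id_rsa-health-check
# docker run -u `id -u` \
    -v /tmp/id_rsa-health-check:/opt/app-root/src/.ssh/id_rsa:Z,ro \
    <remaining options as in the previous example>
----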

Mounting an entire *_.ssh_* directory can be helpful for:

* Allowing you to use an SSH configuration to match keys with hosts or
modify other connection parameters.
* Allowing a user to provide a *_known_hosts_* file and have SSH validate host
keys. This is disabled by the default configuration and can be re-enabled with
an environment variable by adding `-e ANSIBLE_HOST_KEY_CHECKING=True` to the
`docker` command line, as shown in the sketch after this list.
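
For example, mounting the whole directory with host key checking enabled might
look like the following sketch. The container-side *_.ssh_* path is an
assumption about the non-root user's home directory inside the image:

----
# docker run -u `id -u` \
    -v $HOME/.ssh:/opt/app-root/src/.ssh:Z,ro \
    -e ANSIBLE_HOST_KEY_CHECKING=True \
    <remaining options as in the previous examples>
----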