
Commit

Code block style fixes
ahardin-rh authored and openshift-cherrypick-robot committed Aug 7, 2020
1 parent b36b4d2 commit a57cff9
Showing 8 changed files with 54 additions and 13 deletions.
2 changes: 2 additions & 0 deletions modules/cluster-node-tuning-operator-verify-profiles.adoc
@@ -16,6 +16,7 @@ $ oc get pods -n openshift-cluster-node-tuning-operator -o wide
----
+
.Example output
[source,terminal]
----
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cluster-node-tuning-operator-599489d4f7-k4hw4 1/1 Running 0 6d2h 10.129.0.76 ip-10-0-145-113.eu-west-3.compute.internal <none> <none>
@@ -34,6 +35,7 @@ $ for p in `oc get pods -n openshift-cluster-node-tuning-operator -l openshift-a
----
+
.Example output
[source,terminal]
----
*** tuned-2jkzp ***
2020-07-10 13:53:35,368 INFO tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied
1 change: 1 addition & 0 deletions modules/configuring-cluster-monitoring.adoc
@@ -11,6 +11,7 @@ To increase the storage capacity for Prometheus:

. Create a YAML configuration file, `cluster-monitoring-config.yml`. For example:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
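The diff truncates the example after `kind: ConfigMap`. For orientation, a fuller sketch of such a monitoring ConfigMap might look like the following; the `volumeClaimTemplate` fields and the `100Gi` request are illustrative assumptions, not values taken from this commit:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: 100Gi  # illustrative size; pick one for your retention needs
```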
6 changes: 3 additions & 3 deletions modules/custom-tuning-example.adoc
@@ -10,10 +10,10 @@ The following CR applies custom node-level tuning for
`tuned.openshift.io/ingress-node-label` set to any value.
As an administrator, use the following command to create a custom Tuned CR.

.Example

.Custom tuning example
[source,terminal]
----
oc create -f- <<_EOF_
$ oc create -f- <<_EOF_
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
3 changes: 3 additions & 0 deletions modules/pod-interactions-with-topology-manager.adoc
@@ -10,6 +10,7 @@ The example Pod specs below help illustrate Pod interactions with Topology Manag
The following Pod runs in the `BestEffort` QoS class because no resource requests or
limits are specified.

[source,yaml]
----
spec:
containers:
@@ -19,6 +20,7 @@ spec:

The next Pod runs in the `Burstable` QoS class because requests are less than limits.

[source,yaml]
----
spec:
containers:
@@ -36,6 +38,7 @@ not consider either of these Pod specifications.

The last example Pod below runs in the Guaranteed QoS class because requests are equal to limits.

[source,yaml]
----
spec:
containers:
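The diff truncates each of the Pod specs above. For comparison, a minimal complete spec that lands in the `Guaranteed` QoS class could look like the following; the container name, image, and resource amounts are hypothetical, chosen only to make requests equal limits:

```yaml
spec:
  containers:
  - name: app                          # hypothetical container name
    image: registry.example.com/app:latest  # hypothetical image
    resources:
      requests:
        cpu: "2"
        memory: 200Mi
      limits:
        cpu: "2"        # requests == limits on every resource -> Guaranteed
        memory: 200Mi
```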
7 changes: 4 additions & 3 deletions modules/recommended-install-practices.adoc
@@ -9,6 +9,7 @@ When installing large clusters or scaling the cluster to larger node counts,
set the cluster network `cidr` accordingly in your `install-config.yaml`
file before you install the cluster:

[source,yaml]
----
networking:
clusterNetwork:
@@ -20,6 +21,6 @@ networking:
- 172.30.0.0/16
----

The default clusterNetwork cidr 10.128.0.0/14 cannot be used if the cluster size is more
than 500 nodes. It must be set to 10.128.0.0/12 or 10.128.0.0/10 to get to larger node
counts beyond 500 nodes.
The default cluster network `cidr` `10.128.0.0/14` cannot be used if the cluster
size is more than 500 nodes. It must be set to `10.128.0.0/12` or
`10.128.0.0/10` to get to larger node counts beyond 500 nodes.
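The limit follows from subnet arithmetic: with the default `hostPrefix` of `23`, each node consumes one /23 subnet out of the `clusterNetwork` block. A quick back-of-the-envelope check (the `subnets_for_cidr` helper is hypothetical, not part of any OpenShift tooling):

```shell
# Number of /23 node subnets available in a clusterNetwork CIDR,
# assuming the default hostPrefix of 23 (one /23 per node).
subnets_for_cidr() {
  # $1 is the clusterNetwork prefix length, e.g. 14 for 10.128.0.0/14
  echo $(( 1 << (23 - $1) ))
}

subnets_for_cidr 14   # 512 node subnets: too tight beyond 500 nodes
subnets_for_cidr 12   # 2048 node subnets
subnets_for_cidr 10   # 8192 node subnets
```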
45 changes: 39 additions & 6 deletions modules/setting-up-cpu-manager.adoc
@@ -22,6 +22,7 @@ In this example, all workers have CPU Manager enabled:

. Add a label to the worker `MachineConfigPool`:
+
[source,yaml]
----
metadata:
creationTimestamp: 2019-xx-xxx
@@ -35,6 +36,7 @@ Refer to the label created in the previous step to have the correct nodes
updated with the new `KubeletConfig`. See the `machineConfigPoolSelector`
section:
+
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
@@ -67,7 +69,11 @@ is not needed.
+
----
# oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7
----
+
.Example output
[source,json]
----
"ownerReferences": [
{
"apiVersion": "machineconfiguration.openshift.io/v1",
@@ -83,6 +89,11 @@ is not needed.
----
# oc debug node/perf-node.example.com
sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager
----
+
.Example output
[source,terminal]
----
cpuManagerPolicy: static <1>
cpuManagerReconcilePeriod: 5s <1>
----
@@ -94,6 +105,11 @@ that will be dedicated to this Pod:
+
----
# cat cpumanager-pod.yaml
----
+
.Example output
[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
@@ -123,6 +139,11 @@
+
----
# oc describe pod cpumanager
----
+
.Example output
[source,terminal]
----
Name: cpumanager-6cqz7
Namespace: default
Priority: 0
@@ -159,6 +180,11 @@ Pods of quality of service (QoS) tier `Guaranteed` are placed within the
----
# cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope
# for i in `ls cpuset.cpus tasks` ; do echo -n "$i "; cat $i ; done
----
+
.Example output
[source,terminal]
----
cpuset.cpus 1
tasks 32706
----
@@ -167,20 +193,26 @@ tasks 32706
+
----
# grep ^Cpus_allowed_list /proc/32706/status
----
+
.Example output
[source,terminal]
----
Cpus_allowed_list: 1
----

. Verify that another pod (in this case, the pod in the `burstable` QoS tier) on
the system cannot run on the core allocated for the `Guaranteed` pod:
. Verify that another Pod (in this case, the Pod in the `burstable` QoS tier) on
the system cannot run on the core allocated for the `Guaranteed` Pod:
+
----
# cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus
0
# oc describe node perf-node.example.com
----
+
.Example output
[source,terminal]
----
# oc describe node perf-node.example.com
...
Capacity:
attachable-volumes-aws-ebs: 39
@@ -213,8 +245,9 @@ of one core is subtracted from the total capacity of the node to arrive at the
`Node Allocatable` amount. You can see that `Allocatable CPU` is 1500 millicores.
This means you can run one of the CPU Manager pods since each will take one whole
core. A whole core is equivalent to 1000 millicores. If you try to schedule a
second pod, the system will accept the pod, but it will never be scheduled:
second Pod, the system will accept the Pod, but it will never be scheduled:
+
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
cpumanager-6cqz7 1/1 Running 0 33m
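The `Node Allocatable` arithmetic above can be sketched in a few lines of shell; the 2000-millicore capacity is an assumption inferred from the walkthrough's 1500-millicore allocatable figure, not queried from a live node:

```shell
# Back-of-the-envelope check of the Node Allocatable math:
# capacity minus the reserved half-core leaves room for one
# whole-core (1000m) Guaranteed CPU Manager pod.
capacity_mcpu=2000     # assumed 2-CPU worker node
reserved_mcpu=500      # half of one core reserved for the system
allocatable_mcpu=$(( capacity_mcpu - reserved_mcpu ))
guaranteed_pods=$(( allocatable_mcpu / 1000 ))  # each pod pins one whole core
echo "$allocatable_mcpu millicores allocatable, room for $guaranteed_pods pod(s)"
```

A second whole-core pod would need another 1000 millicores, which the remaining 500 cannot cover, so it is accepted but stays `Pending`.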
1 change: 1 addition & 0 deletions scalability_and_performance/using-cluster-loader.adoc
@@ -26,6 +26,7 @@ include::modules/configuring-cluster-loader.adoc[leveloffset=+1]
creation fails with `error: unknown parameter name "IDENTIFIER"`. If you deploy
templates, add this parameter to your template to avoid this error:
+
[source,yaml]
----
{
"name": "IDENTIFIER",
2 changes: 1 addition & 1 deletion whats_new/new-features.adoc
@@ -129,7 +129,7 @@ handled via Amazon), Manila provisioner/operator, and Snapshot.
=== Cluster maximums

Updated guidance around
xref:../scalability_and_performance/planning-your-environment-according-to-object-maximums.adoc[Cluster
xref:../scalability_and_performance/planning-your-environment-according-to-object-maximums.adoc#planning-your-environment-according-to-object-maximums[Cluster
maximums] for OpenShift v4 is now available.

Use the link:https://access.redhat.com/labs/ocplimitscalculator/[{product-title}
