followup edits to the scaling and performance merged PR
brice committed Apr 12, 2017
1 parent 47a30d0 commit 8f07b1d
Showing 7 changed files with 44 additions and 30 deletions.
12 changes: 7 additions & 5 deletions scaling_performance/host_practices.adoc
@@ -6,6 +6,8 @@
 :icons:
 :experimental:
 
+toc::[]
+
 [[scaling-performance-capacity-host-practices-master]]
 == Recommended Practices for {product-title} Master Hosts
 
@@ -27,7 +29,7 @@ Optimize this traffic path by:
 == Recommended Practices for {product-title} Node Hosts
 
 The {product-title} node configuration file at
-*_/etc/origin/node/node-config.yaml_* contains important options such as the
+*_/etc/origin/node/node-config.yaml_* contains important options, such as the
 iptables synchronization period, the Maximum Transmission Unit (MTU) of the SDN network, and the proxy-mode.
 
 The node configuration file allows you to pass arguments to the kubelet
@@ -53,10 +55,10 @@ Exceeding the `max-pods` values can result in:
 
 [NOTE]
 ====
-In Kubernetes, a pod that is holding a single
-container actually uses two containers. The second container is used to set up
-networking prior to the actual container starting. Therefore, a system running
-10 pods will actually have 20 containers running.
+In Kubernetes, a pod that is holding a single container actually uses two
+containers. The second container is used to set up networking prior to the
+actual container starting. Therefore, a system running 10 pods will actually
+have 20 containers running.
 ====
 
 See the xref:../install_config/install/planning.adoc#sizing[Sizing
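As an illustrative aside (not part of this commit), the node options the hunk above mentions live in *_/etc/origin/node/node-config.yaml_*. The fragment below is a hedged sketch: the field names follow the OpenShift 3.x node configuration schema as I understand it, and every value is an example only:

```yaml
# /etc/origin/node/node-config.yaml -- illustrative fragment, example values
iptablesSyncPeriod: "30s"      # how often iptables rules are resynchronized
networkConfig:
  mtu: 1450                    # SDN MTU; physical MTU minus VXLAN overhead
kubeletArguments:
  max-pods:                    # upper bound on pods per node
    - "250"
proxyArguments:
  proxy-mode:                  # kube-proxy mode
    - "iptables"
```

Raising `max-pods` without sizing the node accordingly runs into the failure modes the surrounding section warns about.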
10 changes: 5 additions & 5 deletions scaling_performance/index.adoc
@@ -6,12 +6,12 @@
 :icons:
 :experimental:
 
-This guide provides procedures and examples for how to enhance {product-title}
-cluster performance and conduct scaling at different levels of an
-{product-title} production stack. It includes recommended practices for
+This guide provides procedures and examples for how to enhance your
+{product-title} cluster performance and conduct scaling at different levels of
+an {product-title} production stack. It includes recommended practices for
 building, scaling, and tuning {product-title} clusters.
 
-Tuning considerations vary depending on your cluster setup, and any performance
-recommendations in this guide might come with trade-offs.
+Tuning considerations can vary depending on your cluster setup, and be advised
+that any performance recommendations in this guide might come with trade-offs.
 
 
11 changes: 7 additions & 4 deletions scaling_performance/install_practices.adoc
@@ -6,21 +6,24 @@
 :icons:
 :experimental:
 
+toc::[]
+
 [[scaling-performance-preinstalling-dependencies]]
 == Pre-installing Dependencies
 
 A node host will access the network to install any RPMs dependencies, such as
-`atomic-openshift-*`, `iptables`, `docker`. Pre-installing these dependencies,
-creates a more efficient install, because the RPMs are only accessed when
-necessary, instead of a number of times per host during the install.
+`atomic-openshift-*`, `iptables`, and `docker`. Pre-installing these
+dependencies, creates a more efficient install, because the RPMs are only
+accessed when necessary, instead of a number of times per host during the
+install.
 
 This is also useful for machines that cannot access the registry for security
 purposes.
 
 [[scaling-performance-install-optimization]]
 == Ansible Install Optimization
 
-{product-title} is installed using Ansible. Ansible is useful for running
+The {product-title} install method uses Ansible. Ansible is useful for running
 parallel operations, meaning a fast and efficient installation. However, these
 can be improved upon with additional tuning options.
 
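As a hedged aside (not from this patch), tuning of the kind the hunk above alludes to typically lives in `ansible.cfg` on the host running the installer. The settings below are standard Ansible configuration options; the values are illustrative only:

```ini
# ansible.cfg -- illustrative tuning values, not from this commit
[defaults]
forks = 20                # run tasks against more hosts in parallel
host_key_checking = False # skip interactive SSH host-key prompts

[ssh_connection]
pipelining = True         # fewer SSH round trips per task
ssh_args = -o ControlMaster=auto -o ControlPersist=600s
```

Higher `forks` values trade installer-host CPU and memory for wall-clock time; pipelining requires that `requiretty` is not enforced in sudoers on the managed hosts.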
16 changes: 9 additions & 7 deletions scaling_performance/network_optimization.adoc
@@ -6,6 +6,7 @@
 :icons:
 :experimental:
 
+toc::[]
 
 [[scaling-performance-scaling-router-haproxy]]
 == Scaling {product-title} HAProxy Router
@@ -27,9 +28,9 @@ capable of saturating 1 Gbit NIC at page sizes as small as 8 kB.
 The table below shows HTTP keep-alive performance on such a public cloud
 instance with a single HAProxy and 100 routes:
 
-[cols="2,3,3"]
+[cols="2,3,3",options="header"]
 |===
-|Encryption |Page size |HTTP(s) requests per second
+|*Encryption* |*Page size* |*HTTP(s) requests per second*
 |none |1kB |15435
 |none |4kB |11947
 |edge |1kB |7467
@@ -47,9 +48,9 @@ overhead is introduced by the virtualization layer in place on public clouds and
 holds mostly true for private cloud-based virtualization as well. The following
 table is a guide on how many applications to use behind the router:
 
-[cols="2,4"]
+[cols="2,4",options="header"]
 |===
-|Number of applications |Application type
+|*Number of applications* |*Application type*
 |5-10 |static file/web server or caching proxy
 |100-1000 |applications generating dynamic content
 
@@ -66,9 +67,10 @@ scale the routing tier.
 [[scaling-performance-network-performance]]
 == Optimizing Network Performance
 
-The xref:../architecture/additional_concepts.adoc#openshift-sdn[OpenShift SDN] uses OpenvSwitch, virtual extensible LAN (VXLAN) tunnels, OpenFlow rules, and
-iptables. This network can be tuned by using jumbo frames, network interface cards (NIC) offloads,
-multi-queue, and ethtool settings.
+The xref:../architecture/additional_concepts.adoc#openshift-sdn[OpenShift SDN]
+uses OpenvSwitch, virtual extensible LAN (VXLAN) tunnels, OpenFlow rules, and
+iptables. This network can be tuned by using jumbo frames, network interface
+cards (NIC) offloads, multi-queue, and ethtool settings.
 
 VXLAN provides benefits over VLANs, such as an increase in networks from 4096 to
 over 16 million, and layer 2 connectivity across physical networks. This allows
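The jumbo-frame and MTU advice in the hunk above rests on simple arithmetic: VXLAN encapsulation consumes a fixed 50 bytes of every physical frame, so the overlay MTU must be the physical MTU minus 50. This sketch (an editorial aside, not part of the patch; the `overlay_mtu` helper name is my own) makes the accounting explicit:

```python
# Byte counts of the headers that VXLAN encapsulation adds inside the
# physical MTU: the inner Ethernet frame is carried as payload, wrapped
# in a VXLAN header, a UDP header, and an outer IPv4 header.
INNER_ETHERNET = 14  # encapsulated inner Ethernet header
VXLAN_HEADER = 8     # VXLAN header
OUTER_UDP = 8        # outer UDP header
OUTER_IPV4 = 20      # outer IPv4 header

VXLAN_OVERHEAD = INNER_ETHERNET + VXLAN_HEADER + OUTER_UDP + OUTER_IPV4  # 50 bytes

def overlay_mtu(physical_mtu: int) -> int:
    """Largest overlay (SDN) MTU that still fits in one physical frame."""
    return physical_mtu - VXLAN_OVERHEAD

print(overlay_mtu(1500))  # -> 1450 on a standard 1500-byte NIC
print(overlay_mtu(9000))  # -> 8950 with jumbo frames
```

This is why a jumbo-frame (9000-byte) underlay lets pods exchange far larger packets per tunnel round trip than the 1450-byte default.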
12 changes: 7 additions & 5 deletions scaling_performance/optimizing_compute_resources.adoc
@@ -6,17 +6,19 @@
 :icons:
 :experimental:
 
+toc::[]
+
 [[scaling-performance-overcomitting]]
 == Overcommitting
 
 You can use overcommit procedures so that resources such as CPU and memory are
 more accessible to the parts of your cluster that need them.
 
-Note that when you overcommit, there is a risk that another application may
-not have access to the resources it requires when it needs them, which will
-result in reduced performance. However, this may be an acceptable trade-off in
-favor of increased density and reduced costs. For example, development, quality assurance (QA), or
-test environments may be overcommited, whereas production might not be.
+Note that when you overcommit, there is a risk that another application may not
+have access to the resources it requires when it needs them, which will result
+in reduced performance. However, this may be an acceptable trade-off in favor of
+increased density and reduced costs. For example, development, quality assurance
+(QA), or test environments may be overcommited, whereas production might not be.
 
 {product-title} implements resource management through the compute resource model and
 quota system. See the documentation for more information about the
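As a hedged illustration (not part of this commit), overcommit in the compute resource model boils down to requesting less than the limit: the scheduler places pods by their requests, while limits cap burst usage. All names and values below are hypothetical examples:

```yaml
# Illustrative pod spec: requests below limits permit overcommit.
apiVersion: v1
kind: Pod
metadata:
  name: overcommit-example        # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:latest     # hypothetical image
    resources:
      requests:
        cpu: "100m"               # what the scheduler reserves
        memory: "256Mi"
      limits:
        cpu: "500m"               # what the container may burst to
        memory: "512Mi"
```

A node can host many such pods whose limits sum to more than its capacity, which is exactly the density-versus-contention trade-off the paragraph above describes.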
11 changes: 7 additions & 4 deletions scaling_performance/optimizing_storage.adoc
@@ -6,6 +6,7 @@
 :icons:
 :experimental:
 
+toc::[]
 
 == Optimizing Storage
 
@@ -29,7 +30,9 @@ Using a Loop device back-end can affect performance issues. While you can still
 continue to use it, Docker logs a warning message. For example:
 ----
-devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section.
+devmapper: Usage of loopback devices is strongly discouraged for production use.
+Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to
+dm.thinpooldev section.
 ----
 ====

@@ -71,9 +74,9 @@ xvdb 202:16 0 20G 0 disk
 +
 [NOTE]
 ====
-Thin-provisioned volumes are not mounted and have no
-file system (individual containers do have an XFS file system), thus they will not
-show up in “df” output.
+Thin-provisioned volumes are not mounted and have no file system (individual
+containers do have an XFS file system), thus they will not show up in “df”
+output.
 ====
 
 . To verify that Docker is using a LVM thin pool, and to monitor disk space
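As a hedged aside (not from this patch), the LVM thin pool that the verification step above checks for is conventionally provisioned through `docker-storage-setup`, which reads a configuration file like the following; the device and volume group names are examples only:

```ini
# /etc/sysconfig/docker-storage-setup -- illustrative values
DEVS=/dev/xvdb    # dedicated block device for the thin pool
VG=docker-vg      # volume group created for Docker storage
```

Running `docker-storage-setup` with such a file replaces the discouraged loopback back-end with a devicemapper thin pool on real storage.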
2 changes: 2 additions & 0 deletions scaling_performance/scaling_cluster_metrics.adoc
@@ -6,6 +6,8 @@
 :icons:
 :experimental:
 
+toc::[]
+
 == Overview
 
 {product-title} exposes metrics that can be collected and stored in back-ends by
