After getting to know the basics of configuring your own inventory file, you can review the following example inventories, which describe various environment topologies, including using multiple masters for high availability. You can choose an example that matches your requirements, modify it to match your own environment, and use it as your inventory file when running the installation.
You can configure an environment with a single master and multiple nodes, and either a single etcd host or multiple external etcd hosts.
Note: Moving from a single master cluster to multiple masters after installation is not supported.
The following table describes an example environment for a single master (with a single etcd instance on the same host), two nodes for hosting user applications, and two nodes with the region=infra label for hosting dedicated infrastructure:
| Host Name | Infrastructure Component to Install |
|---|---|
| master.example.com | Master, etcd, and node |
| node1.example.com | Node |
| node2.example.com | Node |
| infra-node1.example.com | Node (with region=infra label) |
| infra-node2.example.com | Node (with region=infra label) |
You can see these example hosts present in the [masters], [etcd], and [nodes] sections of the following example inventory file:
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_become must be set to true
#ansible_become=true

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# host group for masters
[masters]
master.example.com

# host group for etcd
[etcd]
master.example.com

# host group for nodes, includes region info
[nodes]
master.example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
Important: See Configuring Node Host Labels to ensure you understand the default node selector requirements and node label considerations beginning in {product-title} 3.9.
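For example, to keep the region=infra nodes dedicated to infrastructure components, you can set a default node selector so that user pods are scheduled onto the primary region. The following is a minimal sketch assuming the osm_default_node_selector variable from openshift-ansible; confirm the exact variable name and behavior for your version in Configuring Node Host Labels:

# Hypothetical addition to the [OSEv3:vars] section: schedule user pods
# onto region=primary nodes by default
osm_default_node_selector='region=primary'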
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
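With the inventory saved, the installation itself is typically run with Ansible. The following is a sketch assuming the RPM-installed openshift-ansible playbook locations; the paths differ for a Git checkout or other releases:

# Hypothetical invocation: run the prerequisites playbook, then deploy the cluster
ansible-playbook -i /etc/ansible/hosts /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i /etc/ansible/hosts /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml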
The following table describes an example environment for a single master, three etcd hosts, two nodes for hosting user applications, and two nodes with the region=infra label for hosting dedicated infrastructure:
| Host Name | Infrastructure Component to Install |
|---|---|
| master.example.com | Master and node |
| etcd1.example.com | etcd |
| etcd2.example.com | etcd |
| etcd3.example.com | etcd |
| node1.example.com | Node |
| node2.example.com | Node |
| infra-node1.example.com | Node (with region=infra label) |
| infra-node2.example.com | Node (with region=infra label) |
You can see these example hosts present in the [masters], [nodes], and [etcd] sections of the following example inventory file:
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# host group for masters
[masters]
master.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# host group for nodes, includes region info
[nodes]
master.example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
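After installation, you can confirm that the three external etcd hosts formed a healthy cluster. The following is a sketch using the etcd v2 client syntax and the default /etc/etcd certificate paths; both are assumptions, so adjust for your environment:

# Hypothetical health check, run from one of the etcd hosts
etcdctl --ca-file=/etc/etcd/ca.crt \
        --cert-file=/etc/etcd/peer.crt \
        --key-file=/etc/etcd/peer.key \
        --endpoints=https://etcd1.example.com:2379 \
        cluster-health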
You can configure an environment with multiple masters, multiple etcd hosts, and multiple nodes. Configuring multiple masters for high availability (HA) ensures that the cluster has no single point of failure.
Note: Moving from a single master cluster to multiple masters after installation is not supported.
When configuring multiple masters, the cluster installation process supports the native high availability (HA) method. This method leverages the native HA master capabilities built into {product-title} and can be combined with any load balancing solution.
If a host is defined in the [lb] section of the inventory file, Ansible installs and configures HAProxy automatically as the load balancing solution. If no host is defined, it is assumed you have pre-configured an external load balancing solution of your choice to balance the master API (port 8443) on all master hosts.
Note: This HAProxy load balancer is intended to demonstrate the API server's HA mode and is not recommended for production environments. If you are deploying to a cloud provider, Red Hat recommends deploying a cloud-native TCP-based load balancer or taking other steps to provide a highly available load balancer.
For an external load balancing solution, you must have:

- A pre-created load balancer virtual IP (VIP) configured for SSL passthrough.
- A VIP listening on the port specified by the openshift_master_api_port value (8443 by default) and proxying back to all master hosts on that port.
- A domain name for the VIP registered in DNS.
  - The domain name becomes the value of both openshift_master_cluster_public_hostname and openshift_master_cluster_hostname in the {product-title} installer.

See the External Load Balancer Integrations example in GitHub for more information. For more on the high availability master architecture, see Kubernetes Infrastructure.
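To illustrate these requirements, the following is a minimal sketch of what an external HAProxy configuration performing SSL passthrough for the master API might look like. It reuses the example hostnames and the default 8443 port from this section and is not the configuration that the installer generates:

# Hypothetical external load balancer configuration (not installer-generated).
# TCP mode passes SSL through to the masters untouched.
frontend openshift-master-api
    bind *:8443
    mode tcp
    default_backend openshift-masters

backend openshift-masters
    mode tcp
    balance source
    server master1 master1.example.com:8443 check
    server master2 master2.example.com:8443 check
    server master3 master3.example.com:8443 check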
Note: The cluster installation process does not currently support multiple HAProxy load balancers in an active-passive setup. See the Load Balancer Administration documentation for post-installation amendments.
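One common post-installation pattern is to pair two HAProxy hosts with a keepalived-managed VIP, where the active host holds the VIP and the passive host takes it over on failure. The following is a minimal sketch only; the interface name and VIP are placeholders, and the Load Balancer Administration documentation covers the complete setup:

# Hypothetical keepalived configuration on the active HAProxy host;
# the passive host uses state BACKUP and a lower priority
vrrp_instance openshift_api {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.0.2.10
    }
}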
To configure multiple masters, refer to the following Multiple Masters with Multiple etcd example.
The following table describes an example environment for three masters using the native HA method, one HAProxy load balancer, three etcd hosts, two nodes for hosting user applications, and two nodes with the region=infra label for hosting dedicated infrastructure:
| Host Name | Infrastructure Component to Install |
|---|---|
| master1.example.com | Master (clustered using native HA) and node |
| master2.example.com | Master (clustered using native HA) and node |
| master3.example.com | Master (clustered using native HA) and node |
| lb.example.com | HAProxy to load balance API master endpoints |
| etcd1.example.com | etcd |
| etcd2.example.com | etcd |
| etcd3.example.com | etcd |
| node1.example.com | Node |
| node2.example.com | Node |
| infra-node1.example.com | Node (with region=infra label) |
| infra-node2.example.com | Node (with region=infra label) |
You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the following example inventory file:
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined, the installer assumes that a load balancer has
# been preconfigured. For installation, the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# apply updated node defaults
openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}

# enable ntp on masters to ensure proper failover
openshift_clock_enabled=true

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
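After installation, a quick smoke test of the HA setup is to query the master API through the load balancer using the public cluster hostname from the inventory. The following is a sketch: -k skips certificate verification and is for a quick check only, and the /healthz endpoint is assumed to be reachable without authentication in this release:

# Hypothetical check of the load-balanced master API
curl -k https://openshift-cluster.example.com:8443/healthz
# Expected output: ok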
The following table describes an example environment for three masters using the native HA method (with etcd on each host), one HAProxy load balancer, two nodes for hosting user applications, and two nodes with the region=infra label for hosting dedicated infrastructure:
| Host Name | Infrastructure Component to Install |
|---|---|
| master1.example.com | Master (clustered using native HA) and node with etcd on each host |
| master2.example.com | Master (clustered using native HA) and node with etcd on each host |
| master3.example.com | Master (clustered using native HA) and node with etcd on each host |
| lb.example.com | HAProxy to load balance API master endpoints |
| node1.example.com | Node |
| node2.example.com | Node |
| infra-node1.example.com | Node (with region=infra label) |
| infra-node2.example.com | Node (with region=infra label) |
You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the following example inventory file:
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined, the installer assumes that a load balancer has
# been preconfigured. For installation, the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
master1.example.com
master2.example.com
master3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
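With etcd co-located on the masters, you can verify after installation that all three masters joined the etcd cluster. The following is a sketch using the etcd v2 client syntax and the default /etc/etcd certificate paths; both are assumptions, so adjust for your environment:

# Hypothetical membership check, run from master1; expects three members
etcdctl --ca-file=/etc/etcd/ca.crt \
        --cert-file=/etc/etcd/peer.crt \
        --key-file=/etc/etcd/peer.key \
        --endpoints=https://master1.example.com:2379 \
        member list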