Before installing {product-title}, you must first:
-
See the Prerequisites and Host Preparation topics to prepare your hosts. This includes verifying system and environment requirements per component type and properly installing and configuring the docker service. It also includes installing Ansible version 2.4 or later, as the installation method is based on Ansible playbooks and as such requires directly invoking Ansible; a quick version check is shown after this list.
-
See the Configuring Your Inventory File topic to define your environment and desired {product-title} cluster configuration. This inventory file will be used to initiate the installation, and should be saved and maintained for future cluster upgrades as well.
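For example, you can confirm that a suitable Ansible version is installed with:
$ ansible --version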
If you are interested in installing {product-title} using the system container method (required for RHEL Atomic Host systems), see RPM Versus System Container Considerations to ensure that you understand the differences between these methods, then return to this topic to continue.
For large-scale installs, including suggestions for optimizing install time, see the Scaling and Performance Guide.
Note
|
To install {product-title} solely as a stand-alone registry instead, see Installing a Stand-alone Registry. |
The installer uses modularized playbooks, allowing administrators to install specific components as needed. Breaking up the roles and playbooks enables better targeting of ad hoc administration tasks and provides an increased level of control during installations, saving time. The playbooks and their ordering are detailed below in Running Individual Component Playbooks.
Important
|
While RHEL Atomic Host is supported for running {product-title} services as system containers, the installation method utilizes Ansible, which is not available in RHEL Atomic Host. The RPM-based installer must therefore be run from a RHEL host. The host initiating the installation does not need to be intended for inclusion in the {product-title} cluster, but it can be. Alternatively, a containerized version of the installer is available as a system container, which can be run from a RHEL Atomic Host system. |
After you have configured Ansible by defining an inventory file in /etc/ansible/hosts, run the installation playbook via Ansible using either the RPM-based or containerized installer.
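As a minimal illustration only (host names, group membership, and variable values here are placeholders; see Configuring Your Inventory File for the options your cluster actually needs), an inventory file might look like:
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin

[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"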
The RPM-based installer uses Ansible installed via RPM packages to run playbooks and configuration files available on the local host.
Important
|
Do not run OpenShift Ansible playbooks under `nohup`. Using `nohup` with the playbooks causes file descriptors to be created and not closed, so the system can run out of files to open and the playbook will fail. |
To run the RPM-based installer:
-
Run the prerequisites.yml playbook. This must be run only once, before deploying a new cluster. Use the following command, specifying `-i` if your inventory file is located somewhere other than /etc/ansible/hosts:
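# ansible-playbook [-i /path/to/inventory] /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
-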
Run the deploy_cluster.yml playbook to initiate the cluster installation:
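# ansible-playbook [-i /path/to/inventory] /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml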
If for any reason the installation fails, before re-running the installer, see Known Issues to check for any specific instructions or workarounds.
Warning
|
The installer caches playbook configuration values for 10 minutes, by default. If you change any system, network, or inventory configuration, and then re-run the installer within that 10-minute period, the new values are not used, and the previous values are used instead. You can delete the contents of the cache, which is defined by the `fact_caching_connection` value in the /etc/ansible/ansible.cfg file. |
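For illustration, fact caching is controlled by settings such as the following in /etc/ansible/ansible.cfg (the cache path shown here is an assumption, not a guaranteed default; check your own configuration):
[defaults]
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible/facts
Deleting the contents of that directory, for example with # rm -rf /tmp/ansible/facts, clears the cache before a re-run.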
The installer image is a containerized version of the {product-title} installer. It provides the same functionality as the RPM-based installer, but it runs in a containerized environment that provides all of its dependencies rather than being installed directly on the host. The only requirement to use it is the ability to run a container.
The installer image can be used as a system container. System containers are stored and run outside of the traditional docker service. This enables running the installer image from one of the target hosts without concern for the install restarting docker on the host.
To use the Atomic CLI to run the installer as a run-once system container, perform the following steps as the root user:
-
Run the prerequisites.yml playbook:
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \ (1)
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml \
    --set OPTS="-v" \
-
Specify the location on the local host for your inventory file.
This command runs a set of prerequisite tasks by using the specified inventory file and the root user’s SSH configuration.
-
-
Run the deploy_cluster.yml playbook:
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \ (1)
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml \
    --set OPTS="-v" \
-
Specify the location on the local host for your inventory file.
This command initiates the cluster installation by using the specified inventory file and the root user’s SSH configuration. It logs the output on the terminal and also saves it in the /var/log/ansible.log file. The first time this command is run, the image is imported into OSTree storage (system containers use this rather than docker daemon storage). On subsequent runs, it reuses the stored image.

If for any reason the installation fails, before re-running the installer, see Known Issues to check for any specific instructions or workarounds.
-
You can use the PLAYBOOK_FILE environment variable to specify other playbooks you want to run by using the containerized installer. The default value of PLAYBOOK_FILE is /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml, which is the main cluster installation playbook, but you can set it to the path of another playbook inside the container.
For example, to run the pre-install checks playbook before installation, use the following command:
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/openshift-checks/pre-install.yml \ (1)
    --set OPTS="-v" \ (2)
-
Set PLAYBOOK_FILE to the full path of the playbook starting at the playbooks/ directory. Playbooks are located in the same locations as with the RPM-based installer.
-
Set OPTS to add command line options to ansible-playbook.
The installer image can also run as a docker container anywhere that docker can run.
Warning
|
This method must not be used to run the installer on one of the hosts being configured, as the install may restart docker on the host, disrupting the installer container execution. |
Note
|
Although this method and the system container method above use the same image, they run with different entry points and contexts, so runtime parameters are not the same. |
At a minimum, when running the installer as a docker container you must provide:
-
SSH key(s), so that Ansible can reach your hosts.
-
An Ansible inventory file.
-
The location of the Ansible playbook to run against that inventory.
Here is an example of how to run an install via docker, which must be run by a non-root user with access to docker:
-
First, run the prerequisites.yml playbook:
$ docker run -t -u `id -u` \ (1)
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \ (2)
    -v $HOME/ansible/hosts:/tmp/inventory:Z \ (3)
    -e INVENTORY_FILE=/tmp/inventory \ (3)
    -e PLAYBOOK_FILE=playbooks/prerequisites.yml \ (4)
    -e OPTS="-v" \ (5)
-
-u `id -u`
makes the container run with the same UID as the current user, which allows that user to use the SSH key inside the container (SSH private keys are expected to be readable only by their owner). -
-v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z
mounts your SSH key ($HOME/.ssh/id_rsa
) under the container user’s$HOME/.ssh
(/opt/app-root/src is the$HOME
of the user in the container). If you mount the SSH key into a non-standard location you can add an environment variable with-e ANSIBLE_PRIVATE_KEY_FILE=/the/mount/point
or setansible_ssh_private_key_file=/the/mount/point
as a variable in the inventory to point Ansible at it. Note that the SSH key is mounted with the:Z
flag. This is required so that the container can read the SSH key under its restricted SELinux context. This also means that your original SSH key file will be re-labeled to something likesystem_u:object_r:container_file_t:s0:c113,c247
. For more details about:Z
, check thedocker-run(1)
man page. Keep this in mind when providing these volume mount specifications because this might have unexpected consequences: for example, if you mount (and therefore re-label) your whole$HOME/.ssh
directory it will block the host’s sshd from accessing your public keys to log in. For this reason you may want to use a separate copy of the SSH key (or directory), so that the original file labels remain untouched; see the example after this list. -
-v $HOME/ansible/hosts:/tmp/inventory:Z
and -e INVENTORY_FILE=/tmp/inventory
mount a static Ansible inventory file into the container as /tmp/inventory and set the corresponding environment variable to point at it. As with the SSH key, the inventory file may need to be relabeled by using the :Z
flag to allow reading in the container, depending on the existing label (for files in a user$HOME
directory this is likely to be needed). So again you may prefer to copy the inventory to a dedicated location before mounting it. The inventory file can also be downloaded from a web server if you specify theINVENTORY_URL
environment variable, or generated dynamically usingDYNAMIC_SCRIPT_URL
to specify an executable script that provides a dynamic inventory. -
-e PLAYBOOK_FILE=playbooks/prerequisites.yml
specifies the playbook to run (in this example, the prerequisites playbook) as a relative path from the top level directory of openshift-ansible content. The full path from the RPM can also be used, as well as the path to any other playbook file in the container. -
-e OPTS="-v"
supplies arbitrary command line options (in this case,-v
to increase verbosity) to theansible-playbook
command that runs inside the container.
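As suggested in the SSH key callout above, a minimal sketch of preparing a dedicated copy of the key so that the :Z relabeling never touches your original file (the directory name here is illustrative):
$ mkdir -p $HOME/ansible-keys
$ cp $HOME/.ssh/id_rsa $HOME/ansible-keys/id_rsa
$ chmod 600 $HOME/ansible-keys/id_rsa
You would then mount it with -v $HOME/ansible-keys/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z instead of the original key path.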
-
-
Next, run the deploy_cluster.yml playbook to initiate the cluster installation:
$ docker run -t -u `id -u` \
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \
    -v $HOME/ansible/hosts:/tmp/inventory:Z \
    -e INVENTORY_FILE=/tmp/inventory \
    -e PLAYBOOK_FILE=playbooks/deploy_cluster.yml \
    -e OPTS="-v" \
The main installation playbook {pb-prefix}playbooks/deploy_cluster.yml runs a set of individual component playbooks in a specific order, and the installer reports back at the end what phases you have gone through. If the installation fails during a phase, you are notified on the screen along with the errors from the Ansible run.
After you resolve the issue, rather than run the entire installation over again, you can pick up from the failed phase. You must then run each of the remaining playbooks in order:
# ansible-playbook [-i /path/to/inventory] <playbook_file_location>
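For example, if the installation failed during the Node Install phase, you might resume by running that phase's playbook and then each remaining playbook in the table below, in order:
# ansible-playbook [-i /path/to/inventory] {pb-prefix}playbooks/openshift-node/config.yml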
The following table is sorted in order of when each individual component playbook is run:
| Playbook Name | File Location |
|---|---|
| Health Check | {pb-prefix}playbooks/openshift-checks/pre-install.yml |
| etcd Install | {pb-prefix}playbooks/openshift-etcd/config.yml |
| NFS Install | {pb-prefix}playbooks/openshift-nfs/config.yml |
| Load Balancer Install | {pb-prefix}playbooks/openshift-loadbalancer/config.yml |
| Master Install | {pb-prefix}playbooks/openshift-master/config.yml |
| Master Additional Install | {pb-prefix}playbooks/openshift-master/additional_config.yml |
| Node Install | {pb-prefix}playbooks/openshift-node/config.yml |
| GlusterFS Install | {pb-prefix}playbooks/openshift-glusterfs/config.yml |
| Hosted Install | {pb-prefix}playbooks/openshift-hosted/config.yml |
| Web Console Install | {pb-prefix}playbooks/openshift-web-console/config.yml |
| Metrics Install | {pb-prefix}playbooks/openshift-metrics/config.yml |
| Logging Install | {pb-prefix}playbooks/openshift-logging/config.yml |
| Prometheus Install | {pb-prefix}playbooks/openshift-prometheus/config.yml |
| Service Catalog Install | {pb-prefix}playbooks/openshift-service-catalog/config.yml |
| Management Install | {pb-prefix}playbooks/openshift-management/config.yml |
After the installation completes:
-
Verify that the master is started and nodes are registered and reporting in Ready status. On the master host, run the following as root:
# oc get nodes
NAME                 STATUS    ROLES     AGE       VERSION
master.example.com   Ready     master    7h        v1.9.1+a0ce1bc657
node1.example.com    Ready     compute   7h        v1.9.1+a0ce1bc657
node2.example.com    Ready     compute   7h        v1.9.1+a0ce1bc657
-
To verify that the web console is installed correctly, use the master host name and the web console port number to access the web console with a web browser.
For example, for a master host with a host name of master.openshift.com and using the default port of 8443, the web console would be found at https://master.openshift.com:8443/console.
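If no browser is available, a quick reachability check from the command line (the host name and port are illustrative, and -k skips verification of self-signed certificates) might be:
$ curl -k https://master.openshift.com:8443/console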
If you installed multiple etcd hosts:
-
First, verify that the etcd package, which provides the etcdctl command, is installed:
# yum install etcd
-
On a master host, verify the etcd cluster health, substituting the FQDNs of your etcd hosts in the following:
# etcdctl -C \
    https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
    --ca-file=/etc/origin/master/master.etcd-ca.crt \
    --cert-file=/etc/origin/master/master.etcd-client.crt \
    --key-file=/etc/origin/master/master.etcd-client.key cluster-health
-
Also verify the member list is correct:
# etcdctl -C \
    https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
    --ca-file=/etc/origin/master/master.etcd-ca.crt \
    --cert-file=/etc/origin/master/master.etcd-client.crt \
    --key-file=/etc/origin/master/master.etcd-client.key member list
If you installed multiple masters using HAProxy as a load balancer, browse to the following URL according to your [lb] section definition and check HAProxy’s status:
http://<lb_hostname>:9000
You can verify your installation by consulting the HAProxy Configuration documentation.
Running docker build is a privileged process, so the container has more access to the node than might be considered acceptable in some multi-tenant environments. If you do not trust your users, you can use a more secure option at the time of installation: disable Docker builds on the cluster and require that users build images outside of the cluster. See Securing Builds by Strategy for more information on this optional process.
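For reference, the approach described in Securing Builds by Strategy revokes the Docker build strategy cluster-wide; a sketch of the command (verify it against that topic for your version) is:
# oc adm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated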
You can uninstall {product-title} hosts in your cluster by running the uninstall.yml playbook. This playbook deletes {product-title} content installed by Ansible, including:
-
Configuration
-
Containers
-
Default templates and image streams
-
Images
-
RPM packages
The playbook deletes content for any hosts defined in the inventory file that you specify when running the playbook. If you want to uninstall {product-title} across all hosts in your cluster, run the playbook using the inventory file you used when you initially installed {product-title} or most recently ran the playbook:
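Assuming the uninstall playbook is in the standard adhoc/ directory of the openshift-ansible content (verify the path for your version):
# ansible-playbook [-i /path/to/file] /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml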
You can also uninstall node components from specific hosts using the uninstall.yml playbook while leaving the remaining hosts and cluster alone:
Warning
|
This method should only be used when attempting to uninstall specific node hosts and not for specific masters or etcd hosts, which would require further configuration changes within the cluster. |
-
First follow the steps in Deleting Nodes to remove the node object from the cluster, then continue with the remaining steps in this procedure.
-
Create a different inventory file that only references those hosts. For example, to only delete content from one node:
[OSEv3:children]
nodes (1)

[OSEv3:vars]
ansible_ssh_user=root

[nodes]
node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}" (2)
-
Only include the sections that pertain to the hosts you are interested in uninstalling.
-
Only include hosts that you want to uninstall.
-
-
Specify that new inventory file using the -i option when running the uninstall.yml playbook:
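Using the same playbook location assumed above:
# ansible-playbook -i /path/to/new/file /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml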
When the playbook completes, all {product-title} content should be removed from any specified hosts.
-
On failover in multiple master clusters, it is possible for the controller manager to overcorrect, which causes the system to run more pods than intended. However, this is a transient event and the system corrects itself over time. See kubernetes/kubernetes#10030 for details.
-
On failure of the Ansible installer, you must start from a clean operating system installation. If you are using virtual machines, start from a fresh image. If you are using bare metal machines, see Uninstalling {product-title} for instructions.
Now that you have a working {product-title} instance, you can:
-
Deploy an integrated Docker registry.
-
Deploy a router.