Building Block 0 - Validate and prepare infrastructure

This Building Block verifies a given environment before attempting to install Red Hat OpenShift.

While running this Building Block, the following technical prerequisites are verified:

  • each system has at least 4 CPU cores (physical or virtual)

  • each system has at least 16 GB of RAM

  • each system has at least 20 GB of disk storage under /

  • each system has at least 40 GB of disk storage under /var

  • each system has at least 15 GB of disk storage on an empty device for container storage (e.g. /dev/vdb)

  • if OCS is used, each system that needs to host OCS has at least 100 GB of raw disk (e.g. /dev/vdc)
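If you want to spot-check a single host by hand before running the automated validation, the following commands cover the same points (a minimal sketch, assuming standard RHEL tooling; /dev/vdb and /dev/vdc are the example device names from the list above):

# CPU cores, expect at least 4
nproc
# RAM in GB, expect at least 16
free -g | awk '/^Mem:/ {print $2 " GB"}'
# disk space under / (>= 20 GB) and /var (>= 40 GB)
df -BG / /var
# empty devices reserved for container storage and, if used, OCS
lsblk /dev/vdb /dev/vdc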

Hardware requirements are based on the official OCP documentation.

Prerequisites for this Building Block

To run this Building Block, we recommend preparing the following systems for the default Flavor (Standard):

  • 1 RHEL Server as Bastion, incl. HAProxy Load Balancer

  • 3 RHEL Servers as Red Hat OpenShift Masters, incl. Infrastructure Containers

  • 3 RHEL Servers as Red Hat OpenShift Nodes, incl. OCS Containers

These systems can be bare metal or virtual servers running on any hypervisor.

The topology can be modified in terms of Flavors:

  • Standard: default flavor, 7 nodes

    • 1 Bastion

    • 3 Masters

    • 3 Nodes: contain OCS

  • Mini: 4 nodes

    • 1 Bastion

    • 1 Master: OCS node 1

    • 1 Infranode: OCS node 2

    • 1 Node: OCS node 3

  • Full: 10 nodes

    • 1 Bastion

    • 3 Masters

    • 3 Infranodes: contain OCS

    • 3 Nodes

Note
Please be aware that Red Hat OpenShift currently expects a minimum of 4 CPU cores and 16 GB of RAM per node.

Steps to Perform for this Building Block

Download Scripts and Playbooks

As a first step, you have to download the scripts and Ansible Playbooks. To do so, please run the following commands on your Bastion server as a privileged user:

$ sudo -i
# curl -o stc.tgz  -L https://github.com/RedHat-EMEA-SSA-Team/stc/archive/latest.tar.gz
# tar xvf stc.tgz --strip 1
# chmod +x setup.sh

Prepare Configuration File

With the scripts downloaded, you now need to create a configuration YAML file named env.yml, describing your environment. Based on this file, this Building Block will perform a set of verification steps.

To make the creation of this configuration YAML file as easy as possible, we provide a simple script, the STC Installer, for you.

Please fill in the following information into the script when prompted; default values are shown inside []:

  • Hardware Requirements: whether your infrastructure respects the hardware requirements

  • OCP Version: select the OpenShift version, default is 3.10

  • Cluster hostname (API DNS): hostname under which your Red Hat OpenShift cluster will be accessible after installation

  • Wildcard DNS for Apps: DNS record as entered in your DNS server to catch all future applications running on Red Hat OpenShift (see the DNS check after this list)

  • Cluster Topology: select the STC Flavor, standard (default), mini or full

  • Bastion hostname: the hostname of the jumphost used to run validation and installation

  • Master hostnames: 1 or 3, depending on the Flavor

  • Infranode hostnames: 1 or 3, available only for the mini or full Flavors

  • Node hostnames: 1 or 3, depending on the Flavor

  • Proxy support: enable proxy access for OpenShift services, Docker, Git and environment configuration

  • Internet proxy HTTP: hostname and port of your HTTP proxy, if one is needed for outside connectivity

  • Internet proxy HTTPS: hostname and port of your HTTPS proxy, if one is needed for outside connectivity

  • No proxy: hosts to exclude from proxying (you can skip localhost and .svc, as they will be added automatically)

  • Proxy username: username, in case you need to authenticate with the proxies defined in the previous fields

  • Proxy password: password, in case you need to authenticate with the proxies defined in the previous fields

  • Remote SSH user: remote user used to access all hosts used for OpenShift; must have sudo access on the hosts. Defaults to root.

  • How do you manage your subscription and repositories: select RHSM for direct access to the Red Hat CDN for Subscription Management, or select Satellite if present

  • RHSM Username: enter your RHSM username; the password will be asked for later and stored in an Ansible Vault

  • Satellite Org ID and Activation Key: provide an Org ID and an Activation Key that contains the OpenShift required repositories through a properly configured Content View
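Regarding the two DNS entries: the API hostname has to resolve to your load balancer (the Bastion in the standard Flavor), and any name below the Apps wildcard has to resolve as well. You can verify both from the Bastion before starting (a minimal sketch using dig; the hostnames are the example values used throughout this document):

# API hostname, must resolve to the load balancer
dig +short openshift.example.com
# any name below the Apps wildcard must resolve, e.g. a made-up application hostname
dig +short myapp.apps.example.com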

The content of this YAML file will be generated in the stc directory and used by the installer. If you agree to proceed with the y option, the validation will start:

Generated configuration:

********************* STC Conf file *********************
ocp_version: 3.11
api_dns: openshift.example.com
apps_dns: apps.example.com
bastion: bastion.example.com
lb: bastion.example.com
masters:
- master01.example.com
- master02.example.com
- master03.example.com
nodes:
- node01.example.com
- node02.example.com
- node03.example.com
proxy_http: http://proxy.example.com:3128
proxy_https: http://proxy.example.com:3128
proxy_no: proxy.example.com
cns:
- master01.example.com
- master02.example.com
- master03.example.com
ssh_user: cloud-user
subscription_activationkey: ocp39
subscription_org_id: RedHat
****************** End STC Conf file ********************

Do you want to proceed?
y n

Another example, with the standard STC topology (nodes contain the router and registry):

ocp_version: 3.11
lb: bastion
bastion: bastion
masters:
- master01
- master02
- master03
nodes:
- node01
- node02
- node03
ssh_user: cloud-user
apps_dns: apps.your-ip.nip.io
api_dns: master.your-ip.nip.io
rhn_username: username

Example with a smaller topology and infranodes, with version 3.10:

ocp_version: 3.10
bastion: bastion
masters:
- master01
infranodes:
- infranode01
nodes:
- node01
ssh_user: cloud-user
proxy_http: 'http://proxy.company.local:3128'
proxy_https: 'http://proxy.company.local:3128'
proxy_no: 'satellite.company.local'
apps_dns: apps.company.local
api_dns: master01.company.local
rhn_username: username

Set up the Bastion host and validate the configuration

In this step, we will use a script to

  • prepare the Bastion system

  • verify the correctness of the created YAML Configuration file

To do so, please run the following command on your Bastion server as root or as a user with sudo privileges.

./setup.sh

The script will ask you:

  • which version of OpenShift to prepare prerequisites for and verify; defaults to 3.10

  • which type of subscription management to use in order to register the hosts; the default is RHSM (needs access to the Red Hat CDN), the alternative is Satellite, for which you provide an Organization ID and an Activation Key

After this, it will register the Bastion host and start the validation across the nodes, preparing an inventory file to be used later on to install OCP.

 ____ _____ ____
/ ___|_   _/ ___|
\___ \ | || |
 ___) || || |___
|____/ |_| \____|



Welcome to STC OpenShift Installation Validator
Default values are shown in []

Please select OCP Version to install: 3.11, 3.10
[3.11] 3.10

*** selected 3.11

Please insert Cluster hostname (API DNS):
openshift.example.com
Please insert Wildcard DNS for Apps:
apps.example.com

Cluster Topology Setup

Please select STC Flavor
[standard] mini full

Selected standard Flavor

Please insert Bastion Node hostname:
bastion.example.com

Please insert Master 1 hostname:
master01.example.com
Please insert Master 2 hostname:
master02.example.com
Please insert Master 3 hostname:
master03.example.com


Please insert Node 1 hostname:
node01.example.com
Please insert Node 2 hostname:
node02.example.com
Please insert Node 3 hostname:
node03.example.com

Is there any Proxy to use for OpenShift and Container Runtime?
y [n]
y
Please insert HTTP Proxy:
http://proxy.example.com:3128
Please insert HTTPS Proxy:
http://proxy.example.com:3128
Please insert No Proxy (leave blank if none, automatically adding localhost,127.0.0.1,.svc)
proxy.example.com
Please insert Proxy Username (leave blank if none)

Please insert Proxy Password (leave blank if none)


Please insert SSH username to be used by Ansible:
cloud-user
Please select Subscription management: RHSM or Satellite
[rhsm] satellite
satellite
*** registering host to Satellite
Please insert Organization ID:
RedHat

Please insert Activation Key:
ocp39


Generated configuration:

********************* STC Conf file *********************
ocp_version: 3.11
api_dns: openshift.example.com
apps_dns: apps.example.com
bastion: bastion.example.com
lb: bastion.example.com
masters:
- master01.example.com
- master02.example.com
- master03.example.com
nodes:
- node01.example.com
- node02.example.com
- node03.example.com
proxy_http: http://proxy.example.com:3128
proxy_https: http://proxy.example.com:3128
proxy_no: proxy.example.com
cns:
- node01.example.com
- node02.example.com
- node03.example.com
ssh_user: cloud-user
subscription_activationkey: ocp39
subscription_org_id: RedHat
****************** End STC Conf file ********************

Do you want to proceed?
y n


PLAY RECAP *********************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0

You will also be asked to provide a password to SSH into the 7 systems, and a second password, which will be used to encrypt all given passwords (in an Ansible Vault) during installation and later steps.
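The generated inventory file is referenced as inventory in the commands below. Its exact layout depends on the chosen Flavor and your env.yml; purely as an illustration, an inventory for the standard Flavor could group the hosts along these lines (a hypothetical sketch in Ansible's YAML inventory format, not necessarily what the script emits):

all:
  children:
    bastion:
      hosts:
        bastion.example.com:
    masters:
      hosts:
        master01.example.com:
        master02.example.com:
        master03.example.com:
    nodes:
      hosts:
        node01.example.com:
        node02.example.com:
        node03.example.com: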

Test Ansible inventory and public key authentication

To verify that our previous steps worked and that the public keys have all been successfully transferred to the 7 systems, please run the following:

ansible -i inventory all -m ping

You should get the following output:

master01.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
master02.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
master03.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
node01.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
node02.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
node03.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
bastion.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
[root@localhost ocppoc]#

Validate nodes and external connections for OCP

In the final step, we will run the real validation. To do so, please run:

ansible-playbook -i inventory --ask-vault-pass playbooks/validate.yml

If all steps perform without raising an error, you are ready to proceed and install Red Hat OpenShift.

PLAY [Validate environment] ****************************************************

PLAY [Verify subcription and subscribe nodes] **********************************

TASK [Check Red Hat subscription] **********************************************
< output removed >

TASK [Disable all repos] *******************************************************
< output removed >

TASK [Enable correct repos] ****************************************************
< output removed >

PLAY [Check supported Operating Systems] ***************************************

TASK [Gathering Facts] *********************************************************
< output removed >

TASK [assert] ******************************************************************
< output removed >

PLAY [Check connectivity to whitelisted hosts] *********************************

TASK [Ping proxy whitelisted sites] ********************************************
< output removed >

TASK [Check download speed] ****************************************************
< output removed >

TASK [set_fact] ****************************************************************
< output removed >

TASK [debug] *******************************************************************
< output removed >

TASK [Ensure nc is installed] **************************************************
< output removed >

TASK [Start nc -l to all valid ports] ******************************************
< output removed >

PLAY [Check all ports from bastion] ********************************************

TASK [Check that all needed ports are open] ************************************
< output removed >

TASK [Ensure nc absent] ********************************************************
< output removed >

PLAY [Validate that selinux is in place] ***************************************

TASK [check if selinux is running and enforced] ********************************
< output removed >

PLAY [Identify the space available in] *****************************************

TASK [command] *****************************************************************
< output removed >

TASK [Set root disk facts] *****************************************************
< output removed >

TASK [Fail if there is not enough space available in /] ************************
< output removed >

PLAY [Check if Network Manager is running] *************************************

TASK [Ensure that NetworkManager is running] ***********************************
< output removed >

TASK [Report status of Network Manager] ****************************************
< output removed >

PLAY [Prepare install and validate docker] *************************************

TASK [Gathering Facts] *********************************************************
< output removed >

TASK [docker_setup : setup] ****************************************************
< output removed >

TASK [docker_setup : Figure out device reserved for docker] ********************
< output removed >

TASK [docker_setup : set_fact] *************************************************
< output removed >

TASK [docker_setup : Ensure docker installed] **********************************
< output removed >

TASK [docker_setup : Detect Docker storage configuration status] ***************
< output removed >

TASK [docker_setup : Create docker storage configuration] **********************
< output removed >

TASK [docker_setup : Apply Docker storage configuration changes] ***************
< output removed >

TASK [docker_setup : Fail if Docker version is < 1.12] *************************
< output removed >

TASK [docker_setup : Enable docker] ********************************************
< output removed >

TASK [docker_setup : Start docker] *********************************************
< output removed >

TASK [docker_validation : Pull some basic docker images] ***********************
< output removed >

PLAY RECAP *********************************************************************
bastion                    : ok=8    changed=0    unreachable=0    failed=0
localhost                  : ok=1    changed=0    unreachable=0    failed=0
master01                   : ok=28   changed=8    unreachable=0    failed=0
master02                   : ok=28   changed=8    unreachable=0    failed=0
master03                   : ok=28   changed=8    unreachable=0    failed=0
node01                     : ok=28   changed=8    unreachable=0    failed=0
node02                     : ok=28   changed=8    unreachable=0    failed=0
node03                     : ok=28   changed=8    unreachable=0    failed=0

Hacking Building Block Flavors

It is still possible to use a free topology schema by editing env.yml, for changes such as adding more nodes or using an external load balancer.
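For instance, starting from the standard Flavor, env.yml could be extended along these lines (a hypothetical sketch: the additional node entries and the dedicated lb host are assumptions about one possible free topology, reusing only keys that appear in the examples above):

ocp_version: 3.11
api_dns: openshift.example.com
apps_dns: apps.example.com
bastion: bastion.example.com
lb: loadbalancer.example.com
masters:
- master01.example.com
- master02.example.com
- master03.example.com
nodes:
- node01.example.com
- node02.example.com
- node03.example.com
- node04.example.com
- node05.example.com
ssh_user: cloud-user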

OpenStack provisioner

How to use the OpenStack provisioner for OpenStack infrastructure provisioning. If you need more flexibility, please consider the OCP documentation.

For your information, if you would like to activate the OpenStack cloud provider: a cloud provider changes the nodename to the name of the instance. This means it is strongly recommended that the instance name in OpenStack, the OpenShift nodename and the DNS name are the same, and that you use FQDNs!
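To cross-check this on a provisioned instance (a minimal sketch, assuming a configured openstack CLI; <instance> is a placeholder for your server name or ID):

# FQDN as seen by the operating system (and therefore by OpenShift)
hostname -f
# instance name as registered in OpenStack; the two should match
openstack server show <instance> -f value -c name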

Note

Please take care that the quotas are set correctly, for example:

openstack quota set --secgroups 100 --volumes 100 --ram 62520 --cores 50 admin

docker run -ti -v "$(pwd)":/work:z quay.io/redhat/stc-openstack-provisioner

export OS_USERNAME=admin
export OS_PASSWORD=xxxx
export OS_AUTH_URL=xxxx
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3

export STC_RHN_PASSWORD=xxxx
export STC_RHN_USERNAME=xxx
export STC_SUBSCRIPTION_POOL_ID=xx
export STC_REGISTRY_TOKEN_USER=xxx
export STC_REGISTRY_TOKEN=xxxx
export STC_FLAVOR=mini  # Only supported flavor at the moment is mini
export STC_IAAS_MACHINE_SIZE=m1.xlarge # m1.xlarge is Default
export STC_IAAS_CONTAINER_STORAGE_DISK=15 # Size in GB of container image storage, default 15.
export STC_IAAS_GLUSTERFS_DISK=100 # Size in GB of gluster disk, default 100.
export STC_IAAS_INTERNAL_NETWORK=admin # Network name where the VMs are added, default admin.

# Please upload an RHEL Image to your openstack env.
#   openstack image create --public --disk-format qcow2 --file rhel-server-7.6-update-4-x86_64-kvm.qcow2 rhel-server-7.6-update-4-x86_64
export STC_IAAS_IMAGE="RHEL IMAGE NAME" # Default rhel-server-7.6-x86_64-kvm

cd /work
./playbooks/bb00-openstack_provisioning.yml