When deployed on OpenStack, {product-title} can be configured to access the OpenStack infrastructure, including using OpenStack Cinder volumes as persistent storage for application data.
Configuring OpenStack for {product-title} requires the following role:
- Member: For creating assets such as instances, networking ports, floating IPs, volumes, and so on. You need the Member role for the tenant; one way to assign it is shown below.
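If the role has not been assigned yet, an OpenStack administrator can grant it with the openstack CLI. This is a minimal sketch; the user and project names (openshift-user, openshift-project) are placeholders, and the role itself may be named Member or _member_ depending on your Keystone configuration:

$ openstack role add --user openshift-user --project openshift-project Member
$ openstack role assignment list --user openshift-user --project openshift-project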
When installing {product-title} on OpenStack, ensure that you set up the appropriate security groups.
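The exact rules depend on your cluster layout. As an illustrative, incomplete sketch only, security groups and rules can be created with the openstack CLI; the group name (openshift-master) and the ports shown (8443 for the master API, 4789/UDP for the SDN overlay) are assumptions based on common {product-title} defaults and must be adapted to your deployment:

$ openstack security group create openshift-master
$ openstack security group rule create --protocol tcp --dst-port 8443 openshift-master
$ openstack security group rule create --protocol udp --dst-port 4789 openshift-master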
To set the required OpenStack variables, create a /etc/cloud.conf file with the following contents on all of your {product-title} hosts, both masters and nodes:
[Global]
auth-url = <OS_AUTH_URL>
username = <OS_USERNAME>
password = <password>
domain-id = <OS_USER_DOMAIN_ID>
tenant-id = <OS_TENANT_ID>
region = <OS_REGION_NAME>

[LoadBalancer]
subnet-id = <UUID of the load balancer subnet>
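Because the file must be identical on every host, one hedged way to distribute it is an Ansible ad-hoc command with the copy module, assuming the default inventory location and the usual masters and nodes groups:

# ansible -i /etc/ansible/hosts masters:nodes -m copy \
    -a "src=/etc/cloud.conf dest=/etc/cloud.conf owner=root group=root mode=0640"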
Consult your OpenStack administrators for values of the OS_ variables, which are commonly used in OpenStack configuration.
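If you have an OpenStack RC file for the tenant (often downloadable from the OpenStack dashboard), sourcing it is one way to inspect these values; the file name below is a placeholder:

$ source openshift-project-openrc.sh
$ env | grep OS_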
You can set an OpenStack configuration on your {product-title} master and node hosts in two different ways:
- During advanced installations, by setting cluster variables in the Ansible inventory file.
- Manually, by modifying the master-config.yaml and node-config.yaml files.
During advanced installations, OpenStack can be configured using the following parameters, which are configurable in the inventory file:
- openshift_cloudprovider_kind
- openshift_cloudprovider_openstack_auth_url
- openshift_cloudprovider_openstack_username
- openshift_cloudprovider_openstack_password
- openshift_cloudprovider_openstack_domain_id
- openshift_cloudprovider_openstack_domain_name
- openshift_cloudprovider_openstack_tenant_id
- openshift_cloudprovider_openstack_tenant_name
- openshift_cloudprovider_openstack_region
- openshift_cloudprovider_openstack_lb_subnet_id
# Cloud Provider Configuration
#
# Note: You may make use of environment variables rather than store
# sensitive configuration within the ansible inventory.
# For example:
#openshift_cloudprovider_openstack_username="{{ lookup('env','USERNAME') }}"
#openshift_cloudprovider_openstack_password="{{ lookup('env','PASSWORD') }}"
#
# Openstack
#openshift_cloudprovider_kind=openstack
#openshift_cloudprovider_openstack_auth_url=http://openstack.example.com:35357/v2.0/
#openshift_cloudprovider_openstack_username=username
#openshift_cloudprovider_openstack_password=password
#openshift_cloudprovider_openstack_domain_id=domain_id
#openshift_cloudprovider_openstack_domain_name=domain_name
#openshift_cloudprovider_openstack_tenant_id=tenant_id
#openshift_cloudprovider_openstack_tenant_name=tenant_name
#openshift_cloudprovider_openstack_region=region
#openshift_cloudprovider_openstack_lb_subnet_id=subnet_id
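After the inventory is updated, the configuration is applied by the advanced installation playbooks. The playbook path below is an assumption for an RPM-based openshift-ansible installation and differs between releases (older releases use playbooks/byo/config.yml):

# ansible-playbook -i /etc/ansible/hosts \
    /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml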
Edit or create the master configuration file on all masters (/etc/origin/master/master-config.yaml by default) and update the contents of the apiServerArguments and controllerArguments sections:
kubernetesMasterConfig:
  ...
  apiServerArguments:
    cloud-provider:
      - "openstack"
    cloud-config:
      - "/etc/cloud.conf"
  controllerArguments:
    cloud-provider:
      - "openstack"
    cloud-config:
      - "/etc/cloud.conf"
IMPORTANT: When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, cloud.conf should be in /etc/origin/ instead of /etc/.
Edit or create the node configuration file on all nodes (/etc/origin/node/node-config.yaml by default) and update the contents of the kubeletArguments and nodeName sections:
nodeName:
  <instance_name> (1)
kubeletArguments:
  cloud-provider:
    - "openstack"
  cloud-config:
    - "/etc/cloud.conf"

(1) The RFC1123-compliant OpenStack instance name of the node host.
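Restart the node service on each node so that the kubelet picks up the new arguments; the unit name is an assumption ({product-title} RPM installations typically use atomic-openshift-node, while OKD/Origin uses origin-node):

# systemctl restart atomic-openshift-node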
NOTE: If the nodeName value does not match the instance name in your OpenStack metadata, the cloud provider integration does not work properly.
IMPORTANT: When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, cloud.conf should be in /etc/origin/ instead of /etc/.
Administrators can configure zone labels for dynamically created OpenStack PVs. This option is useful if the OpenStack Cinder zone name does not match the compute zone names, for example, if there is only one Cinder zone and many compute zones. Administrators can create Cinder volumes dynamically and then check the labels.
To view the zone labels for the PVs:
# oc get pv --show-labels
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                 STORAGECLASS   REASON    AGE       LABELS
pvc-1faa6f93-64ac-11e8-930c-fa163e3c373c   1Gi        RWO            Delete           Bound     openshift-node/pvc1   standard                 12s       failure-domain.beta.kubernetes.io/zone=nova
Zone labels are enabled by default. Running the oc get pv --show-labels command returns the failure-domain.beta.kubernetes.io/zone=nova label.
To disable the zone label, update the cloud.conf file by adding:
[BlockStorage]
ignore-volume-az = yes
The PVs created after restarting the master services will not have the zone label.
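As a hedged verification sketch, after restarting the master services you can dynamically provision a new volume and confirm that the label is absent; the claim name and storage class (pvc2, standard) are placeholders:

# cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      # Request a 1Gi dynamically provisioned Cinder volume
      storage: 1Gi
  storageClassName: standard
EOF
# oc get pv --show-labels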