completing these steps or to help model your own. You are also free to create
the required resources through other methods; the templates are just an
example.

## Prerequisites

* all prerequisites from the [README](README.md)
* the following binaries installed and in `$PATH`:
    * `gcloud`
    * `gsutil`
* `gcloud` authenticated to an account with [additional](iam.md) roles:
    * Deployment Manager Editor
* the following API services enabled (see the example after this list):
    * Cloud Deployment Manager V2 API (`deploymentmanager.googleapis.com`)

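For example, with `gcloud` installed, you could authenticate and enable the required API like this; a minimal sketch, where the project ID `my-project` is a placeholder:

```sh
# Authenticate; opens a browser for the chosen account.
gcloud auth login

# Point gcloud at the target project (placeholder ID).
gcloud config set project my-project

# Enable the Deployment Manager API.
gcloud services enable deploymentmanager.googleapis.com
```
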
## Create Ignition configs

The machines will be started manually, so you must generate the bootstrap and
machine Ignition configs and store them for later steps.
Use a [staged install](../overview.md#multiple-invocations) to enable the desired customizations.

### Create an install config

Create an install configuration as for [the usual approach](install.md#create-configuration).

```console
$ openshift-install create install-config
? Pull Secret [? for help]
```

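Note that `openshift-install create manifests` (run below) consumes `install-config.yaml`, so keep a copy if you want to reuse it; a simple sketch, where the backup filename is just an example:

```sh
# Keep a pristine copy before the installer consumes the file.
cp install-config.yaml install-config.yaml.backup
```
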
### Empty the compute pool (Optional)

If you do not want the cluster to provision compute machines, edit the resulting `install-config.yaml` to set `replicas` to 0 for the `compute` pool.

```sh
python -c '
import yaml;
path = "install-config.yaml";
data = yaml.full_load(open(path));
data["compute"][0]["replicas"] = 0;
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
```
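
You can print the value back out to confirm the edit took effect; this assumes the same Python/PyYAML used above:

```sh
# Should print 0 after the edit above.
python -c 'import yaml; print(yaml.full_load(open("install-config.yaml"))["compute"][0]["replicas"])'
```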

### Create manifests

Create manifests to enable customizations which are not exposed via the install configuration.

```console
$ openshift-install create manifests
INFO Consuming "Install Config" from target directory
```

### Remove control plane machines

Remove the control plane machines from the manifests.
We'll be providing those ourselves and don't want to involve [the machine-API operator][machine-api-operator].

```sh
rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml
```

### Remove compute machinesets (Optional)

If you do not want the cluster to provision compute machines, remove the compute machinesets from the manifests as well.

```sh
rm -f openshift/99_openshift-cluster-api_worker-machineset-*.yaml
```

### Remove DNS Zones (Optional)

If you don't want [the ingress operator][ingress-operator] to create DNS records on your behalf, remove the `privateZone` and `publicZone` sections from the DNS configuration.
If you do so, you'll need to [add ingress DNS records manually](#add-the-ingress-dns-records-optional) later on.

```sh
python -c '
import yaml;
path = "manifests/cluster-dns-02-config.yml";
data = yaml.full_load(open(path));
del data["spec"]["publicZone"];
del data["spec"]["privateZone"];
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
```

### Create Ignition configs

Now we can create the bootstrap Ignition configs:

```console
$ openshift-install create ignition-configs
$ tree
.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
```

## Extract infrastructure name from Ignition metadata

By default, Ignition generates a unique cluster identifier composed of the
cluster name specified during the invocation of the installer and a short
random string. This infrastructure name is used to name the GCP resources
created in the steps below.

Export variables which will be used throughout the remaining steps:

```sh
export NETWORK_CIDR='10.0.0.0/16'
export MASTER_SUBNET_CIDR='10.0.0.0/19'
export WORKER_SUBNET_CIDR='10.0.32.0/19'

export KUBECONFIG=auth/kubeconfig
export CLUSTER_NAME=`jq -r .clusterName metadata.json`
export INFRA_ID=`jq -r .infraID metadata.json`
export PROJECT_NAME=`jq -r .gcp.projectID metadata.json`
```

Create the worker deployment using gcloud.

```sh
gcloud deployment-manager deployments create ${INFRA_ID}-worker --config 06_worker.yaml
```

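You can check on the deployment's progress and final status with, for example:

```sh
# Shows the deployment's resources and any errors.
gcloud deployment-manager deployments describe ${INFRA_ID}-worker
```
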
### Approving the CSR requests for nodes

The CSR requests for client and server certificates for nodes joining the cluster will need to be approved by the administrator.
Nodes that have not been provisioned by the cluster need their associated `system:serviceaccount` certificate approved to join the cluster.
You can view them with:

```console
$ oc get csr
NAME        AGE   REQUESTOR                                                                    CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper    Approved,Issued
```

CSRs can be approved by name, for example:

```sh
oc adm certificate approve csr-bfd72
```

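If there are many outstanding requests, they can also be approved in bulk; a minimal sketch using only `oc` and `xargs`, to be used only after verifying the requests belong to your nodes:

```sh
# Approve every CSR currently in the list by name.
oc get csr -o name | xargs oc adm certificate approve
```
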
## Add the Ingress DNS Records (Optional)

If you removed the DNS Zone configuration [earlier](#remove-dns-zones-optional), you'll need to manually create some DNS records pointing at the ingress load balancer.
You can create either a wildcard `*.apps.{baseDomain}.` record or specific records (more on the specific records below).
You can use A, CNAME, and other record types, as you see fit.

You must wait for the ingress-router to create a load balancer and populate the `EXTERNAL-IP` field:

```console
$ oc -n openshift-ingress get service router-default
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
router-default   LoadBalancer   172.30.18.154   35.233.157.184   80:32288/TCP,443:31215/TCP   98
```
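
If the field still shows as pending, you can watch the service until the address is populated, using the standard `--watch` flag:

```sh
# Blocks and prints updates as the service changes.
oc -n openshift-ingress get service router-default --watch
```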

Then create the A records in your public and private zones.

```sh
export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'`

if [ -f transaction.yaml ]; then rm transaction.yaml; fi
gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME}
gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${BASE_DOMAIN_ZONE_NAME}
gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}

if [ -f transaction.yaml ]; then rm transaction.yaml; fi
gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone
gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${INFRA_ID}-private-zone
gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone
```
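
Once the transactions execute, you can verify that the wildcard resolves to the router IP (propagation can take a few minutes); a quick check assuming `dig` is available, with `test` as an arbitrary hostname:

```sh
# Any host under *.apps should resolve to ${ROUTER_IP}.
dig +short test.apps.${CLUSTER_NAME}.${BASE_DOMAIN}
```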

If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster's current routes:

```console
$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes
oauth-openshift.apps.your.cluster.domain.example.com
console-openshift-console.apps.your.cluster.domain.example.com
downloads-openshift-console.apps.your.cluster.domain.example.com
alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com
grafana-openshift-monitoring.apps.your.cluster.domain.example.com
prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com
```
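
For example, a sketch that adds one A record per route host to the public zone, reusing the same transaction workflow as above:

```sh
# Create an A record for each route host; repeat with
# --zone ${INFRA_ID}-private-zone for the private zone.
if [ -f transaction.yaml ]; then rm transaction.yaml; fi
gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME}
for host in $(oc get routes --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}'); do
  gcloud dns record-sets transaction add ${ROUTER_IP} --name "${host}." --ttl 300 --type A --zone ${BASE_DOMAIN_ZONE_NAME}
done
gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}
```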

## Monitor for cluster completion

```console
$ openshift-install wait-for install-complete
INFO Waiting up to 30m0s for the cluster to initialize...
```

Also, you can observe the running state of your cluster pods:

```console
$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version             False       True          24m     Working towards 4.2.0-0.okd-2019-08-05-204819: 99% complete
$ oc get pods --all-namespaces
```

[deploymentmanager]: https://cloud.google.com/deployment-manager/docs
[ingress-operator]: https://github.com/openshift/cluster-ingress-operator
[machine-api-operator]: https://github.com/openshift/machine-api-operator