
Commit

final copyedits

JENNIFER RONDEAU authored and Misty Stanley-Jones committed Jun 20, 2018
1 parent b65eee2 commit 83f96dc
Showing 1 changed file with 27 additions and 27 deletions.
54 changes: 27 additions & 27 deletions content/en/docs/setup/independent/high-availability.md
@@ -10,7 +10,9 @@ content_template: templates/task

{{% capture overview %}}

-This page explains two different approaches setting up a highly available Kubernetes
+{{</* feature-state for_k8s_version="v1.11" state="beta" */>}}
+
+This page explains two different approaches to setting up a highly available Kubernetes
cluster using kubeadm:

- With stacked masters. This approach requires less infrastructure. etcd members
@@ -22,8 +24,8 @@ Your clusters must run Kubernetes version 1.11 or later.

{{< caution >}}
**Caution**: This page does not address running your cluster on a cloud provider.
-In a cloud environment, neither approach documented here works with services of type
-LoadBalancer, or with dynamic PersistentVolumes.
+In a cloud environment, neither approach documented here works with Service objects
+of type LoadBalancer, or with dynamic PersistentVolumes.
{{< /caution >}}

{{% /capture %}}
@@ -43,7 +45,7 @@ For both methods you need this infrastructure:
- SSH access from one device to all nodes in the system
- sudo privileges on all machines

-For the external etcd cluster only:
+For the external etcd cluster only, you also need:

- Three additional machines for etcd members

@@ -83,7 +85,7 @@ run as root.
ssh-add ~/.ssh/path_to_private_key
```

-1. SSH between nodes to check that the connection is working properly.
+1. SSH between nodes to check that the connection is working correctly.
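
   One way to run this check is with agent forwarding, so the key loaded on
   your device is usable from the first node (a minimal sketch; `10.0.0.7` and
   `10.0.0.8` are placeholder addresses, not values from this guide):

   ```sh
   # -A forwards the local ssh-agent, so the remote session can hop onward
   # without copying private keys onto the node.
   ssh -A 10.0.0.7
   # From that node, confirm the next hop also works.
   ssh 10.0.0.8
   ```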

**Notes:**

@@ -118,7 +120,7 @@ different configuration.

It is not recommended to use an IP address directly in a cloud environment.

-The load balancer must be able to communicate with all control plane node
+The load balancer must be able to communicate with all control plane nodes
on the apiserver port. It must also allow incoming traffic on its
listening port.
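
Before any apiserver is running, a plain TCP probe is enough to confirm the
load balancer is listening (an illustrative check; the address and port are
placeholders for your own values):

```sh
# A "connection refused" response is expected while no apiserver is running
# behind the load balancer; a timeout instead suggests the load balancer
# itself cannot be reached or is misconfigured.
nc -v LOAD_BALANCER_DNS LOAD_BALANCER_PORT
```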

@@ -167,10 +169,10 @@ will fail the health check until the apiserver is running.

1. Run `sudo kubeadm init --config kubeadm-config.yaml`
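
   Once `kubeadm init` completes, a quick sanity check from this node
   (standard kubectl usage; the kubeconfig path is the one kubeadm writes by
   default on the first control plane node):

   ```sh
   # kubeadm init writes the admin kubeconfig to /etc/kubernetes/admin.conf.
   kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
   kubectl --kubeconfig /etc/kubernetes/admin.conf get pods -n kube-system
   ```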

-### Copy certificates to other control plane nodes
+### Copy required files to other control plane nodes

-The following certificates were created when you ran `kubeadm init`. Copy these certificates
-to your other control plane nodes:
+The following certificates and other required files were created when you ran `kubeadm init`.
+Copy these files to your other control plane nodes:

- `/etc/kubernetes/pki/ca.crt`
- `/etc/kubernetes/pki/ca.key`
@@ -238,8 +240,7 @@ done
# This CIDR is a calico default. Substitute or remove for your CNI provider.
podSubnet: "192.168.0.0/16"

-1. Replace the following variables in the template that was just created with
-   values for your specific situation:
+1. Replace the following variables in the template with the appropriate values for your cluster:

- `LOAD_BALANCER_DNS`
- `LOAD_BALANCER_PORT`
@@ -248,7 +249,7 @@ done
- `CP1_HOSTNAME`
- `CP1_IP`
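
   One way to perform these substitutions is with `sed` (an illustrative
   sketch; every value below is a placeholder for your environment):

   ```sh
   # Replace the template variables in place with environment-specific values.
   sed -i \
     -e 's/LOAD_BALANCER_DNS/lb.example.com/' \
     -e 's/LOAD_BALANCER_PORT/6443/' \
     -e 's/CP0_HOSTNAME/cp0/' \
     -e 's/CP0_IP/10.0.0.7/' \
     -e 's/CP1_HOSTNAME/cp1/' \
     -e 's/CP1_IP/10.0.0.8/' \
     kubeadm-config.yaml
   ```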

-1. Move the copied certificates to the proper locations
+1. Move the copied files to the correct locations:

```sh
USER=ubuntu # customizable
@@ -264,7 +265,7 @@ done
mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf
```
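
   A quick way to confirm the files landed where kubeadm expects them (plain
   `ls`; nothing here is kubeadm-specific):

   ```sh
   ls -l /etc/kubernetes/admin.conf /etc/kubernetes/pki/
   ```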

-1. Run the kubeadm phase commands to bootstrap the kubelet
+1. Run the kubeadm phase commands to bootstrap the kubelet:

```sh
kubeadm alpha phase certs all --config kubeadm-config.yaml
@@ -330,8 +331,7 @@ done
# This CIDR is a calico default. Substitute or remove for your CNI provider.
podSubnet: "192.168.0.0/16"

-1. Replace the following variables in the template that was just created with
-   values for your specific situation:
+1. Replace the following variables in the template with the appropriate values for your cluster:

- `LOAD_BALANCER_DNS`
- `LOAD_BALANCER_PORT`
@@ -342,7 +342,7 @@ done
- `CP2_HOSTNAME`
- `CP2_IP`

-1. Move the copied certificates to the proper locations:
+1. Move the copied files to the correct locations:

```sh
USER=ubuntu # customizable
@@ -368,7 +368,7 @@ done
systemctl start kubelet
```
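
   To confirm the kubelet actually started, standard systemd tooling works:

   ```sh
   # Check the service state; tail the journal if it is not active.
   systemctl status kubelet --no-pager
   journalctl -u kubelet --no-pager | tail -n 20
   ```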

-1. Run the commands to add the node to the etcd cluster
+1. Run the commands to add the node to the etcd cluster:

```sh
CP0_IP=10.0.0.7
@@ -380,7 +380,7 @@ done
kubeadm alpha phase etcd local --config kubeadm-config.yaml
```
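
   After the join, etcd membership can be verified with `etcdctl` (a hedged
   sketch; it assumes the v3 API and the certificate paths kubeadm provisions
   for a stacked etcd member):

   ```sh
   # List members using the kubeadm-provisioned etcd peer certificates.
   ETCDCTL_API=3 etcdctl \
     --endpoints https://127.0.0.1:2379 \
     --cacert /etc/kubernetes/pki/etcd/ca.crt \
     --cert /etc/kubernetes/pki/etcd/peer.crt \
     --key /etc/kubernetes/pki/etcd/peer.key \
     member list
   ```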

-1. Deploy the control plane components and mark the node as a master
+1. Deploy the control plane components and mark the node as a master:

```sh
kubeadm alpha phase kubeconfig all --config kubeadm-config.yaml
@@ -395,10 +395,10 @@ done
- Follow [these instructions](/docs/tasks/administer-cluster/setup-ha-etcd-with-kubeadm/)
to set up the etcd cluster.

-### Copy certificates to other control plane nodes
+### Copy required files to other control plane nodes

-The following certificates were created when you created the cluster. Copy these
-certificates to your other control plane nodes:
+The following certificates were created when you created the cluster. Copy them
+to your other control plane nodes:

- `/etc/kubernetes/pki/etcd/ca.crt`
- `/etc/kubernetes/pki/apiserver-etcd-client.crt`
@@ -451,10 +451,10 @@ for your environment.

1. Run `kubeadm init --config kubeadm-config.yaml`

-### Copy certificates
+### Copy required files to the correct locations

-The following certificates were created when you ran `kubeadm init`. Copy these certificates
-to your other control plane nodes:
+The following certificates and other required files were created when you ran `kubeadm init`.
+Copy these files to your other control plane nodes:

- `/etc/kubernetes/pki/ca.crt`
- `/etc/kubernetes/pki/ca.key`
@@ -463,8 +463,8 @@ to your other control plane nodes:
- `/etc/kubernetes/pki/front-proxy-ca.crt`
- `/etc/kubernetes/pki/front-proxy-ca.key`

-In the following example, replace
-`CONTROL_PLANE_IP` with the IP addresses of the other control plane nodes.
+In the following example, replace the list of
+`CONTROL_PLANE_IP` values with the IP addresses of the other control plane nodes.

```sh
USER=ubuntu # customizable
@@ -485,7 +485,7 @@ In the following example, replace

### Set up the other control plane nodes

-Verify the location of the certificates.
+Verify the location of the copied files.
Your `/etc/kubernetes` directory should look like this:

- `/etc/kubernetes/pki/apiserver-etcd-client.crt`
