**docs/datastore/ha-embedded.md** (12 additions, 8 deletions)

:::note
Embedded etcd (HA) may have performance issues on slower disks such as Raspberry Pis.

An HA embedded etcd cluster must be composed of an odd number of server nodes for etcd to maintain quorum. For a cluster with n servers, quorum is (n/2)+1. Adding a node to an odd-sized cluster appears safer because there are more machines, but the fault tolerance is actually worse: exactly the same number of nodes may fail without losing quorum, yet there are more nodes that can fail. For example, a 3-node cluster has a quorum of 2 and tolerates 1 failure, while a 4-node cluster has a quorum of 3 and still tolerates only 1 failure.
:::

:::note
To rapidly deploy large HA clusters, see [Related Projects](/related-projects)
:::

An HA K3s cluster with embedded etcd is composed of:

- Three or more **server nodes** that will serve the Kubernetes API and run other control plane services, as well as host the embedded etcd datastore.
- Optional: Zero or more **agent nodes** that are designated to run your apps and services
- Optional: A **fixed registration address** for agent nodes to register with the cluster

To get started, first launch a server node with the `cluster-init` flag to enable clustering and a token that will be used as a shared secret to join additional servers to the cluster.

```bash
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server \
    --cluster-init
```
After launching the first server, join the second and third servers to the cluster using the shared secret:

```bash
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server \
    --server https://<ip or hostname of server1>:6443
```

Now you have a highly available control plane. Any successfully clustered server can be used in the `--server` argument to join additional server and agent nodes:

```bash
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - agent --server https://<ip or hostname of server>:6443
```

There are a few config flags that must be the same in all server nodes:

- Network related flags: `--cluster-dns`, `--cluster-domain`, `--cluster-cidr`, `--service-cidr`
- Flags controlling the deployment of certain components: `--disable-helm-controller`, `--disable-kube-proxy`, `--disable-network-policy` and any component passed to `--disable`
- Feature related flags: `--secrets-encryption`
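
Since K3s flags map one-to-one onto keys in its configuration file, one way to keep these identical is to distribute the same `/etc/rancher/k3s/config.yaml` to every server node. A minimal sketch follows; the CIDRs and cluster domain shown are illustrative defaults, not requirements:

```bash
# Sketch: pin the shared flags in /etc/rancher/k3s/config.yaml so every
# server starts with identical settings. Values shown are illustrative.
cat <<'EOF' | sudo tee /etc/rancher/k3s/config.yaml
cluster-cidr: "10.42.0.0/16"
service-cidr: "10.43.0.0/16"
cluster-dns: "10.43.0.10"
cluster-domain: "cluster.local"
secrets-encryption: true
EOF
```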
## Existing single-node clusters

Available as of [v1.22.2+k3s1](https://github.com/k3s-io/k3s/releases/tag/v1.22.2%2Bk3s1)

If you have an existing cluster using the default embedded SQLite database, you can convert it to etcd by simply restarting your K3s server with the `--cluster-init` flag. Once you've done that, you'll be able to add additional instances as described above.
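
As a sketch for systems installed via the install script: re-running the installer with the new argument regenerates the service configuration and restarts K3s with the flag applied:

```bash
# Sketch: re-running the installer with --cluster-init rewrites the K3s
# service unit and restarts the server, which migrates the embedded
# SQLite datastore to etcd.
curl -sfL https://get.k3s.io | sh -s - server --cluster-init
```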
If an etcd datastore is found on disk, because the node has either initialized or joined a cluster already, the datastore arguments (`--cluster-init`, `--server`, `--datastore-endpoint`, etc.) are ignored.
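
One way to check whether a node is in this state, assuming the default data directory, is to look for the embedded etcd database on disk:

```bash
# If this directory exists, the node already has an embedded etcd
# datastore and the datastore arguments above will be ignored.
ls /var/lib/rancher/k3s/server/db/etcd
```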

**docs/datastore/ha.md** (18 additions, 12 deletions)

This section describes how to install a high-availability K3s cluster with an external database.

:::note
To rapidly deploy large HA clusters, see [Related Projects](/related-projects)
:::
Single server clusters can meet a variety of use cases, but for environments where uptime of the Kubernetes control plane is critical, you can run K3s in an HA configuration. An HA K3s cluster is composed of:

- Two or more **server nodes** that will serve the Kubernetes API and run other control plane services
- An **external datastore** (as opposed to the embedded SQLite datastore used in single-server setups)
- Optional: Zero or more **agent nodes** that are designated to run your apps and services
- Optional: A **fixed registration address** for agent nodes to register with the cluster

For more details on how these components work together, refer to the [architecture section](../architecture/architecture.md#high-availability-k3s).

Setting up an HA cluster requires the following steps:

### 1. Create an External Datastore

You will first need to create an external datastore for the cluster. See the [Cluster Datastore Options](datastore.md) documentation for more details.

### 2. Launch Server Nodes

K3s requires two or more server nodes for this HA configuration. See the [Requirements](../installation/requirements.md) guide for minimum machine requirements.

When running the `k3s server` command on these nodes, you must set the `datastore-endpoint` parameter so that K3s knows how to connect to the external datastore. The `token` parameter can also be used to set a deterministic token when adding nodes; if left empty, a token is generated automatically for later use.
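
As a sketch with placeholder values, where the MySQL-style connection string stands in for whichever endpoint format your datastore uses (see the datastore documentation):

```bash
# Placeholder values: every server node points at the same external
# datastore, and K3S_TOKEN pins a deterministic join token.
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server \
    --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/database-name"
```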

Once you've launched the `k3s server` process on all server nodes, ensure that the cluster has come up properly.

The same example command from Step 2 can be used to join additional server nodes; the token from the first node must be used.

If the first server node was started without the `--token` CLI flag or `K3S_TOKEN` variable, the token value can be retrieved from any server already joined to the cluster:

```bash
cat /var/lib/rancher/k3s/server/token
```
There are a few config flags that must be the same in all server nodes:

- Network related flags: `--cluster-dns`, `--cluster-domain`, `--cluster-cidr`, `--service-cidr`
- Flags controlling the deployment of certain components: `--disable-helm-controller`, `--disable-kube-proxy`, `--disable-network-policy` and any component passed to `--disable`
- Feature related flags: `--secrets-encryption`

:::note
Ensure that you retain a copy of this token as it is required when restoring from backup and adding nodes. Previously, K3s did not enforce the use of a token when using external SQL datastores.
:::

### 4. Optional: Configure a Fixed Registration Address

Agent nodes need a URL to register against. This can be the IP or hostname of any server node, but in many cases those may change over time. For example, if running your cluster in a cloud that supports scaling groups, nodes may be created and destroyed over time, changing to different IPs from the initial set of server nodes. It is best to have a stable endpoint in front of the server nodes that will not change over time. This endpoint can be set up using any number of approaches, such as:

- A layer-4 (TCP) load balancer
- Round-robin DNS
- Virtual or elastic IP addresses

See [Cluster Loadbalancer](./cluster-loadbalancer.md) for example configurations.

This endpoint can also be used for accessing the Kubernetes API. So you can, for example, modify your [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file to point to it instead of a specific node.

To avoid certificate errors in such a configuration, you should configure the server with the `--tls-san YOUR_IP_OR_HOSTNAME_HERE` option. This option adds an additional hostname or IP as a Subject Alternative Name in the TLS cert, and it can be specified multiple times if you would like to access via both the IP and the hostname.
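
For example, where the hostname and IP below are placeholders for your fixed registration address:

```bash
# Placeholder SANs: both names are added to the server's TLS certificate,
# so kubectl can connect through either without certificate errors.
curl -sfL https://get.k3s.io | sh -s - server \
    --tls-san=k3s.example.com \
    --tls-san=203.0.113.10
```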

**docs/quick-start/quick-start.md** (7 additions, 5 deletions)

:::note
New to Kubernetes? The official Kubernetes docs already have some great tutorials outlining the basics [here](https://kubernetes.io/docs/tutorials/kubernetes-basics/).
:::

## Install Script
K3s provides an installation script that is a convenient way to install it as a service on systemd or openrc based systems. This script is available at https://get.k3s.io. To install K3s using this method, just run:

```bash
curl -sfL https://get.k3s.io | sh -
```
After running this installation:

- The K3s service will be configured to automatically restart after node reboots or if the process crashes or is killed
- Additional utilities will be installed, including `kubectl`, `crictl`, `ctr`, `k3s-killall.sh`, and `k3s-uninstall.sh`
- A [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file will be written to `/etc/rancher/k3s/k3s.yaml` and the kubectl installed by K3s will automatically use it

A single-node server installation is a fully-functional Kubernetes cluster, including all the datastore, control-plane, kubelet, and container runtime components necessary to host workload pods. It is not necessary to add additional server or agent nodes, but you may want to do so to add additional capacity or redundancy to your cluster.
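
To verify, you can query the node with the kubectl that the installation just set up:

```bash
# The kubectl bundled with K3s reads /etc/rancher/k3s/k3s.yaml
# automatically; a single node in the Ready state should be listed.
sudo k3s kubectl get nodes
```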

To install additional agent nodes and add them to the cluster, run the installation script with the `K3S_URL` and `K3S_TOKEN` environment variables:

```bash
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -
```
Setting the `K3S_URL` parameter causes the installer to configure K3s as an agent, instead of a server. The K3s agent will register with the K3s server listening at the supplied URL. The value to use for `K3S_TOKEN` is stored at `/var/lib/rancher/k3s/server/node-token` on your server node.
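
For example, to read that token on the server node (root privileges are needed, as the file is owned by root):

```bash
# Prints the value to use for K3S_TOKEN when joining agents.
sudo cat /var/lib/rancher/k3s/server/node-token
```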

**docs/related-projects.md**

Projects implementing the K3s distribution are welcome additions to help expand the community. This page introduces a range of projects that are related to K3s and can help you further explore its capabilities and potential applications.

These projects showcase the versatility and adaptability of K3s in various environments, as well as extensions of K3s.
## Bootstrapping a Multi-Node K3s cluster via Ansible

For users seeking to bootstrap a multi-node K3s cluster, we recommend using Ansible. This approach simplifies the setup of a K3s cluster by automating the installation and configuration of each node.

For this, take a look at the [k3s-io/k3s-ansible](https://github.com/k3s-io/k3s-ansible) repository. These scripts provide a convenient way to install K3s on your nodes, allowing you to focus on the configuration of your cluster rather than the installation process.

This approach is particularly useful for creating a High Availability (HA) Kubernetes cluster, as it can be customized to suit the specific requirements of the cluster.
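
A rough sketch of the flow, with the caveat that the inventory layout and playbook names below are assumptions; the repository's README is authoritative:

```bash
# Sketch only: file and playbook names below are assumptions; consult the
# k3s-ansible README for the exact workflow.
git clone https://github.com/k3s-io/k3s-ansible.git
cd k3s-ansible
cp -R inventory/sample inventory/my-cluster   # hypothetical sample inventory
# Edit inventory/my-cluster to list your server and agent node addresses,
# then run the playbook against it:
ansible-playbook site.yml -i inventory/my-cluster/hosts.ini
```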