This article provides instructions for creating and launching a K3s cluster on a Google Compute Engine (GCE) instance, and for adding nodes to an existing K3s cluster on GCE. In addition, it provides guidance on advanced usage of K3s on GCE, such as setting up a private registry and enabling UI components.
To ensure that GCE instances can be created and accessed successfully, please follow the instructions below.
Configure the following environment variables for the host on which you are running autok3s.
export GOOGLE_SERVICE_ACCOUNT_FILE='<service-account-file-path>'
export GOOGLE_SERVICE_ACCOUNT='<service-account-name>'
Please refer to the Google Cloud documentation for more Service Account settings.
Please make sure your service account has access to the specified project and the required compute resource permissions.
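As a minimal sketch of preparing such a service account with the gcloud CLI (the account name autok3s-sa, the project my-project, the roles/compute.admin role, and the key path are all hypothetical placeholders; grant narrower roles if your policy requires it):
# Create a service account for autok3s (names are placeholders)
gcloud iam service-accounts create autok3s-sa --project my-project
# Grant it Compute Engine permissions on the project
gcloud projects add-iam-policy-binding my-project \
  --member "serviceAccount:autok3s-sa@my-project.iam.gserviceaccount.com" \
  --role "roles/compute.admin"
# Download a JSON key to use as GOOGLE_SERVICE_ACCOUNT_FILE
gcloud iam service-accounts keys create ./autok3s-sa-key.json \
  --iam-account autok3s-sa@my-project.iam.gserviceaccount.com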
The GCE instances need to apply the following minimum Security Group Rules:

| Rule | Protocol | Port | Source | Description |
|------|----------|------|--------|-------------|
| InBound | TCP | 22 | ALL | SSH connect port |
| InBound | TCP | 6443 | K3s agent nodes | Kubernetes API |
| InBound | TCP | 10250 | K3s server & agent | Kubelet |
| InBound | UDP | 8472 | K3s server & agent | (Optional) Required only for Flannel VXLAN |
| InBound | TCP | 2379,2380 | K3s server nodes | (Optional) Required only for embedded etcd |
| OutBound | ALL | ALL | ALL | Allow all |
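If you create these rules manually, a sketch with the gcloud CLI might look like the following (the rule name autok3s-k3s, the default network, and the wide-open source range are hypothetical; narrow them for production):
# Open the K3s ports listed above (tighten --source-ranges in real use)
gcloud compute firewall-rules create autok3s-k3s \
  --network default \
  --allow tcp:22,tcp:6443,tcp:10250,tcp:2379,tcp:2380,udp:8472 \
  --source-ranges 0.0.0.0/0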
Please use the autok3s create command to create a cluster on your GCE instance.
The following command uses google as the cloud provider, creates a K3s cluster named "myk3s", and assigns it 1 master node and 1 worker node:
autok3s -d create -p google --name myk3s --master 1 --worker 1 --project <your-project>
Please use one of the following commands to create an HA cluster.
The following command uses google as the cloud provider and creates an HA K3s cluster named "myk3s" with 3 master nodes; the --cluster flag enables embedded etcd for HA.
autok3s -d create -p google --name myk3s --master 3 --cluster --project <your-project>
The following requirements must be met before creating an HA K3s cluster with an external database:
- The number of master nodes in this cluster must be greater than or equal to 1.
- The external database information must be specified within the --datastore "PATH" parameter.
In the example below, --master 2 specifies the number of master nodes to be 2, and --datastore "PATH" specifies the external database information. As a result, the requirements listed above are met.
Run the command below to create an HA K3s cluster with an external database:
autok3s -d create -p google --name myk3s --master 2 --datastore "mysql://<user>:<password>@tcp(<ip>:<port>)/<db>"
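For illustration only, the same command with the placeholders filled in with hypothetical MySQL credentials and address might look like:
autok3s -d create -p google --name myk3s --master 2 \
  --datastore "mysql://k3suser:k3spass@tcp(10.0.0.5:3306)/k3sdb"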
AutoK3s supports more advanced settings to customize your K3s cluster.
If you want to pass additional installation environment variables, set the args below:
--install-env INSTALL_K3S_SKIP_SELINUX_RPM=true --install-env INSTALL_K3S_FORCE_RESTART=true
We recommend only using INSTALL_* environment variables in this way, because this is a global setting for your K3s cluster. If you want to set K3S_* environment variables, please use the K3s configuration file args instead.
In addition to configuring K3s with environment variables and CLI arguments, K3s can also use a config file.
If you want more customized and complex configuration for your K3s cluster, such as etcd snapshots or datastore settings, this arg is what you need.
Here's an example of a K3s server configuration that sets the etcd snapshot options and changes the node port range:
etcd-snapshot-schedule-cron: "* * * * *"
etcd-snapshot-retention: 15
service-node-port-range: "20000-30000"
Save this YAML file to a local path, such as myk3s-server-config.yaml. Then pass the file with the following arg:
--server-config-file /your/path/myk3s-server-config.yaml
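For example, a complete create command that applies this server configuration at cluster creation time could look like the following sketch, combining the flags shown earlier:
autok3s -d create -p google --name myk3s --master 1 --worker 1 \
  --project <your-project> \
  --server-config-file /your/path/myk3s-server-config.yaml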
If you want to set a configuration file for your agent nodes, use the arg --agent-config-file /your/path/agent-config.yaml.
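As a minimal sketch of such an agent configuration file (the node label and kubelet argument below are hypothetical examples of standard K3s agent options):
# agent-config.yaml: hypothetical example values
node-label:
  - "environment=dev"
kubelet-arg:
  - "max-pods=200"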
Please use the autok3s join command to add one or more nodes to an existing K3s cluster.
The command below shows how to add a worker node to an existing K3s cluster named "myk3s":
autok3s -d join -p google --name myk3s --worker 1
The commands to add one or more nodes to an existing HA K3s cluster vary based on the type of HA cluster. Please choose one of the following commands to run.
autok3s -d join -p google --name myk3s --master 2 --worker 1
This command will delete a K3s cluster named "myk3s".
autok3s -d delete -p google --name myk3s
This command will list the clusters that you have created on this machine.
autok3s list
NAME REGION PROVIDER STATUS MASTERS WORKERS VERSION
myk3s asia-northeast1 google Running 1 0 v1.20.2+k3s1
This command will show detailed information about a specified cluster, such as instance status, node IP, kubelet version, etc.
autok3s describe -n <clusterName> -p google
Note: There will be multiple results if clusters were created with the same name under different providers; please use -p <provider> to choose a specific cluster, e.g. autok3s describe -n myk3s -p google.
Name: myk3s
Provider: google
Region: asia-northeast1
Zone: asia-northeast1-b
Master: 1
Worker: 0
Status: Running
Version: v1.20.2+k3s1
Nodes:
- internal-ip: [x.x.x.x]
external-ip: [x.x.x.x]
instance-status: RUNNING
instance-id: xxxxxxxx
roles: control-plane,master
status: Ready
hostname: xxxxxxxx
container-runtime: containerd://1.4.3-k3s1
version: v1.20.2+k3s1
After the cluster is created, autok3s will automatically merge the kubeconfig so that you can access the cluster.
autok3s kubectl config use-context myk3s.asia-northeast1.google
autok3s kubectl <sub-commands> <flags>
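For example, to verify that all nodes have registered:
autok3s kubectl get nodes -o wide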
In a multiple-cluster scenario, you can access different clusters by switching contexts.
autok3s kubectl config get-contexts
autok3s kubectl config use-context <context>
Log in to a specific K3s cluster node via SSH, e.g. myk3s:
autok3s ssh --provider google --name myk3s
The following command will upgrade your K3s cluster to the latest version.
autok3s upgrade --provider google --name myk3s --k3s-channel latest
If you want to upgrade the K3s cluster to a specific version, you can use --k3s-version to override --k3s-channel.
autok3s upgrade --provider google --name myk3s --k3s-version v1.22.4+k3s1
For more usage details, please run the autok3s <sub-command> --provider google --help command.
We integrate some advanced components related to the current provider, such as private registries and UI.
Below is an example showing how you may configure /etc/autok3s/registries.yaml on your current node when using TLS, and how to make it take effect on the K3s cluster via autok3s.
mirrors:
  docker.io:
    endpoint:
      - "https://mycustomreg.com:5000"
configs:
  "mycustomreg.com:5000":
    auth:
      username: xxxxxx # this is the registry username
      password: xxxxxx # this is the registry password
    tls:
      cert_file: # path to the cert file used in the registry
      key_file:  # path to the key file used in the registry
      ca_file:   # path to the ca file used in the registry
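If you don't already have TLS material for the registry, a self-signed certificate and key can be generated as a sketch with openssl (mycustomreg.com is the hypothetical registry host from the example above; a real deployment may need a CA-signed certificate):
# Generate a self-signed certificate/key pair for the registry host
openssl req -x509 -newkey rsa:4096 -nodes -sha256 -days 365 \
  -keyout registry.key -out registry.crt \
  -subj "/CN=mycustomreg.com"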
When running the autok3s create or autok3s join command, the registry will take effect when passed via the --registry /etc/autok3s/registries.yaml flag, e.g.:
autok3s -d create \
--provider google \
--name myk3s \
--master 1 \
--worker 1 \
--registry /etc/autok3s/registries.yaml
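K3s reads its registry configuration from /etc/rancher/k3s/registries.yaml on each node, so as a quick check you can log in to a node and confirm the file was propagated (assuming the cluster created above):
autok3s ssh -p google --name myk3s
cat /etc/rancher/k3s/registries.yaml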
The following command will enable the gcp-cloud-provider for K3s:
autok3s -d create -p google \
... \
--cloud-controller-manager
AutoK3s supports cnrancher/kube-explorer as a UI component.
You can enable kube-explorer using the following command:
autok3s explorer --context myk3s.asia-northeast1.google --port 9999
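Once the explorer starts, the kube-explorer dashboard should be reachable on the chosen port on the local machine, e.g. 127.0.0.1:9999 in this example.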
You can enable kube-explorer when creating a K3s cluster from the UI.
You can also enable or disable kube-explorer at any time from the UI, and access the kube-explorer dashboard via the Explorer button.
You can enable helm-dashboard using the following command.
autok3s helm-dashboard --port 8888
After the server starts successfully, you can access helm-dashboard at http://127.0.0.1:8888.
You can also enable or disable helm-dashboard at any time from the UI, and access it via the dashboard button.
Note: You can only enable helm-dashboard when you have at least one cluster.