k0s is packaged as a single binary, which includes all the needed components. All the binaries are statically linked which means that in typical use cases there's an absolute minimum of external runtime dependencies.
However, depending on the node role and cluster configuration, some of the underlying components may have specific dependencies, like OS level tools, packages and libraries. This page aims to provide a comprehensive overview.
The following command checks for known requirements on a host (currently only available on Linux):
```shell
k0s sysinfo
```
Whenever k0s is run in a multi-node setup (i.e. the `--single` command line flag isn't used), k0s requires a machine ID: a unique host identifier that is somewhat stable across reboots. On Linux, this ID is read from the files `/var/lib/dbus/machine-id` or `/etc/machine-id`. On Windows, it's taken from the registry key `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography\MachineGuid`. If neither of the OS specific sources yields a result, k0s will fall back to using a machine ID based on the hostname.
When running k0s on top of virtualized or containerized environments, you need to ensure that hosts get their own unique IDs, even if they have been created from the same image.
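To verify this, you can inspect and, if needed, regenerate the machine ID on each host. The snippet below is a minimal sketch assuming a systemd-based Linux distribution; adapt it to your environment.

```shell
# Show the current machine ID
cat /etc/machine-id

# On a host cloned from a common image, generate a fresh ID
# (systemd recreates the file when it is missing or empty)
sudo rm -f /etc/machine-id
sudo systemd-machine-id-setup
```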
As k0s operates Kubernetes worker nodes, a certain number of Linux kernel modules and configuration options need to be present on the system. This stems from the need to run containers and to set up networking for them.
The required kernel configuration items are listed below. All of them are available in kernel versions 4.3 and above. If you are running an older kernel, check whether your distribution has backported the relevant features; it may still meet the requirements. k0s checks the Linux kernel release as part of its pre-flight checks and issues a warning if it's below 3.10.
The list covers ONLY the needs of the k0s/Kubernetes components on worker nodes. Your own workloads may require more.
- `CONFIG_CGROUPS`: Control Group support
  - `CONFIG_CGROUP_FREEZER`: Freezer cgroup subsystem
  - `CONFIG_CGROUP_PIDS`: PIDs cgroup subsystem (kubernetes/kubeadm#2335 (comment))
  - `CONFIG_CGROUP_DEVICE`: Device controller for cgroups
  - `CONFIG_CPUSETS`: Cpuset support
  - `CONFIG_CGROUP_CPUACCT`: Simple CPU accounting cgroup subsystem
  - `CONFIG_MEMCG`: Memory Resource Controller for Control Groups
  - (optional) `CONFIG_CGROUP_HUGETLB`: HugeTLB Resource Controller for Control Groups (kubernetes/kubeadm#2335 (comment))
  - `CONFIG_CGROUP_SCHED`: Group CPU scheduler
    - `CONFIG_FAIR_GROUP_SCHED`: Group scheduling for SCHED_OTHER (kubernetes/kubeadm#2335 (comment))
      - (optional) `CONFIG_CFS_BANDWIDTH`: CPU bandwidth provisioning for FAIR_GROUP_SCHED
        Required if CPU CFS quota enforcement is enabled for containers that specify CPU limits (`--cpu-cfs-quota`).
  - (optional) `CONFIG_BLK_CGROUP`: Block IO controller (kubernetes/kubernetes#92287 (comment))
- `CONFIG_NAMESPACES`: Namespaces support
  - `CONFIG_UTS_NS`: UTS namespace
  - `CONFIG_IPC_NS`: IPC namespace
  - `CONFIG_PID_NS`: PID namespace
  - `CONFIG_NET_NS`: Network namespace
- `CONFIG_NET`: Networking support
  - `CONFIG_INET`: TCP/IP networking
  - `CONFIG_NETFILTER`: Network packet filtering framework (Netfilter)
    - (optional) `CONFIG_NETFILTER_ADVANCED`: Advanced netfilter configuration
    - `CONFIG_NETFILTER_XTABLES`: Netfilter Xtables support
      - `CONFIG_NETFILTER_XT_TARGET_REDIRECT`: REDIRECT target support
      - `CONFIG_NETFILTER_XT_MATCH_COMMENT`: "comment" match support
- (optional) `CONFIG_EXT4_FS`: The Extended 4 (ext4) filesystem
- `CONFIG_PROC_FS`: /proc file system support
Note: As part of its pre-flight checks, k0s will try to inspect and validate the kernel configuration. In order for that to succeed, the configuration needs to be accessible at runtime. There are some typical places that k0s will check. A bullet-proof way to ensure accessibility is to enable `CONFIG_IKCONFIG_PROC` and, if it is enabled as a module, to load the `configs` module: `modprobe configs`.
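For a quick manual check, the same sources can be inspected by hand. This is a minimal sketch; the exact location of the on-disk kernel config varies by distribution.

```shell
# Load the configs module if CONFIG_IKCONFIG_PROC was built as a module
sudo modprobe configs

# Inspect a few of the required options via /proc
zgrep -E 'CONFIG_CGROUPS|CONFIG_NAMESPACES|CONFIG_NET_NS' /proc/config.gz

# Many distributions also ship the config next to the kernel image
grep CONFIG_CGROUPS "/boot/config-$(uname -r)"
```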
Both cgroup v1 and cgroup v2 are supported.
Required cgroup controllers:
- cpu
- cpuacct
- cpuset
- memory
- devices
- freezer
- pids
Optional cgroup controllers:
- hugetlb (kubernetes/kubeadm#2335 (comment))
- blkio (kubernetes/kubernetes#92287 (comment))
containerd and cri-o will use blkio to track disk I/O and throttling in both cgroup v1 and v2.
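To see which controllers are actually available on a host, you can query the kernel directly; a minimal sketch covering both hierarchies:

```shell
# cgroup v2 (unified hierarchy): list the enabled controllers
cat /sys/fs/cgroup/cgroup.controllers

# cgroup v1: list the controllers known to the kernel
cat /proc/cgroups
```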
There are a few external tools that may be needed or used under specific circumstances:
In order to use containerd in conjunction with AppArmor, AppArmor must be enabled in the kernel and the `/sbin/apparmor_parser` executable must be installed on the host; otherwise containerd will disable AppArmor support.
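To check whether a host meets both conditions, something along these lines can be used (a sketch; on some distributions the parser lives outside `/sbin`):

```shell
# Is AppArmor enabled in the running kernel?
cat /sys/module/apparmor/parameters/enabled

# Is the parser installed?
command -v apparmor_parser
```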
iptables may be executed to detect if there are any existing iptables rules and, if there are, whether they are in legacy or nft mode. If iptables is not found, k0s will assume that there are no pre-existing iptables rules.
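To see which backend a host's iptables uses, the version string of modern iptables (1.8 and later) includes it:

```shell
# Prints e.g. "iptables v1.8.7 (nf_tables)" or "... (legacy)"
iptables --version
```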
During `k0s install`, the external tool `useradd` will be used on the controllers to create system user accounts for k0s. If it does not exist, k0s will fall back to busybox's `adduser`.
`k0s reset` will execute either `userdel` or `deluser` to clean up system user accounts.
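A quick way to confirm which of these tools a host provides (a sketch; package names and paths differ between distributions):

```shell
# User creation tools (used by "k0s install")
command -v useradd || command -v adduser

# User removal tools (used by "k0s reset")
command -v userdel || command -v deluser
```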
On a k0s worker, `modprobe` will be executed to load kernel modules that are not detected as already loaded.
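As an illustrative spot check, you can verify a few modules that container runtimes and kube-proxy commonly rely on; the exact set k0s probes may differ, and modules built into the kernel won't show up in `lsmod`:

```shell
# Hypothetical spot check of commonly needed modules
for mod in overlay br_netfilter nf_conntrack; do
  if lsmod | awk '{print $1}' | grep -qx "$mod"; then
    echo "$mod: loaded"
  else
    echo "$mod: not loaded (try: sudo modprobe $mod)"
  fi
done
```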
The external `/usr/bin/id` will be executed as a fallback if the local user lookup fails, in case NSS is used.
Some older k0s versions additionally required the following external tools:

- up until k0s v1.21.9+k0s.0: `iptables`
  Required for worker nodes. Resolved by @ncopa in #1046 by adding `iptables` and friends to k0s's embedded binaries.
- up until k0s v1.21.7+k0s.0: `find`, `du` and `nice`
  Required for worker nodes. Resolved upstream by @ncopa in kubernetes/kubernetes#96115, contained in Kubernetes 1.21.8 (5b13c8f68d4) and 1.22.0 (d45ba645a8f).
Windows-specific external runtime dependencies: TBD.