KubeKey: A Powerful Tool for Cluster Deployment

Introduction

KubeKey is an efficient, flexible, extensible, and feature-rich Kubernetes cluster deployment and management tool, open-sourced by the KubeSphere community.

KubeKey has passed the CNCF Kubernetes Conformance certification.

Features

Cluster Lifecycle Management

  • Cluster deployment
  • Cluster deletion
  • Node addition
  • Node removal
  • Cluster upgrade
  • Certificate expiration check
  • Certificate renewal

Extended Features

  • KubeSphere deployment
  • Container image registry deployment (Harbor and registry supported)
  • OS initialization (installing dependencies, configuring time synchronization, running custom scripts, etc.)

Auxiliary Features

  • etcd backup
  • Automatic certificate renewal
  • Custom offline installation packages (see the workflow sketch after this list)
  • Plugin extensions
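
As a sketch of the offline-package workflow (file names are illustrative; the commands follow the kk artifact subcommands listed later on this page):

# Generate a manifest describing the components and images of the target cluster,
# then build an offline installation package (artifact) from it.
./kk create manifest
./kk artifact export -m manifest-sample.yaml -o kubekey-artifact.tar.gz

# On the offline side, create a cluster directly from the artifact.
./kk create cluster -f config-sample.yaml -a kubekey-artifact.tar.gz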

Roadmap

v3.1

2023 Q3

  • Standalone etcd version upgrades
  • Migration of the kubeadm configuration version, supporting Kubernetes v1.27+
  • cri-docker as a container runtime
  • CRI container runtime migration (e.g. docker -> containerd)

v3.2

2024 Q1

  • Agent mode for automated operations on running clusters (e.g. configuration file changes, service restarts)

Technical Principles

KubeKey uses SSH for multi-node task distribution and configuration. On top of this, it abstracts four kinds of objects (Action, Task, Module, and Pipeline) to form a Go-based multi-node task orchestration framework, with which tasks can be freely defined, orchestrated, and managed. All of KubeKey's features are implemented on this core orchestration framework, and users can also build on it to extend KubeKey or develop their own customized projects.

(Figure: the KubeKey task orchestration layers, from Action up to Pipeline)

  • Action: the basic execution unit; one concrete task executed on a single node (cmd, template, scp, fetch).
  • Task: the management unit for Actions; defines the scheduling and execution policy, such as node scheduling, retry count, and whether to execute in parallel.
  • Module: a collection of Tasks that makes up a complete functional module.
  • Pipeline: a collection of Modules, executed in sequence. Every feature in KubeKey is defined as a Pipeline.

Usage

KubeKey can be used in two ways: from the command line, or as an operator (based on cluster-api).

cluster-api is an open-source project and standard initiated by the Kubernetes community that uses Kubernetes objects to create and manage clusters and their infrastructure. It consists of the following components:

  • Cluster API Provider: implements a specific type of infrastructure, such as AWS, Azure, or GCP. KubeKey has contributed an SSH-based provider to cluster-api.
  • Bootstrap Provider: a bootstrap program for creating and managing Kubernetes cluster nodes.
  • Control Plane Provider: a provider for creating and managing Kubernetes control plane nodes.

The KubeKey Operator mode is still in experimental development; if you are interested, refer to:

This page focuses on command-line usage.

Download

curl -sSL https://get-kk.kubesphere.io | sh -
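
The script downloads the kk binary into the current directory. Per the KubeKey documentation, the install script also honors the VERSION and KKZONE variables (the version shown is an example):

# Pin a specific KubeKey release.
curl -sSL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -

# Users in mainland China can switch the download zone for faster downloads.
export KKZONE=cn
curl -sSL https://get-kk.kubesphere.io | sh -

chmod +x kk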

Configuration File

A KubeKey configuration file template can be created with kk create config. It offers rich options for customizing cluster deployment, mainly including:

  • Node information: SSH connection details and preset label values for each node
  • Node roles: the role each node plays in the cluster
  • OS initialization: custom OS setup such as time synchronization and custom scripts
  • etcd: custom etcd parameters
  • Kubernetes control plane: the load-balancing mode of the control plane and related settings
  • Kubernetes cluster: custom cluster settings such as the version and component parameters
  • Kubernetes networking: cluster network parameters such as the pod network, the service network, and network plugin parameters
  • Image registry: private registry information
  • Addons: addons to be installed together with the cluster

A detailed configuration reference is available at:
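
For example, a template like the sample below can be generated with kk create config (the --with-kubernetes flag is part of the kk CLI; the version and file name are illustrative):

./kk create config --with-kubernetes v1.21.5 -f config-sample.yaml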

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  # Assume that the default port for SSH is 22. Otherwise, add the port number after the IP address. 
  # If you install Kubernetes on ARM, add "arch: arm64". For example, {...user: ubuntu, password: Qcloud@123, arch: arm64}.
  - {name: node1, address: 172.16.0.2, internalAddress: 172.16.0.2, port: 8022, user: ubuntu, password: "Qcloud@123"}
  # For default root user.
  # Kubekey will parse `labels` field and automatically label the node.
  - {name: node2, address: 172.16.0.3, internalAddress: 172.16.0.3, password: "Qcloud@123", labels: {disk: SSD, role: backend}}
  # For password-less login with SSH keys.
  - {name: node3, address: 172.16.0.4, internalAddress: 172.16.0.4, privateKeyPath: "~/.ssh/id_rsa"}
  roleGroups:
    etcd:
    - node1 # All the nodes in your cluster that serve as the etcd nodes.
    master:
    - node1
    - node[2:10] # From node2 to node10. All the nodes in your cluster that serve as the master nodes.
    worker:
    - node1
    - node[10:100] # All the nodes in your cluster that serve as the worker nodes.
  controlPlaneEndpoint:
    # Internal loadbalancer for apiservers. Support: haproxy, kube-vip [Default: ""]
    internalLoadbalancer: haproxy
    # Determines whether to use external dns to resolve the control-plane domain. 
    # If 'externalDNS' is set to 'true', the 'address' needs to be set to "".
    externalDNS: false  
    domain: lb.kubesphere.local
    # The IP address of your load balancer. If you use internalLoadbalancer in "kube-vip" mode, a VIP is required here.
    address: ""      
    port: 6443
  system:
    # The ntp servers of chrony.
    ntpServers:
      - time1.cloud.tencent.com
      - ntp.aliyun.com
      - node1 # Use a node name from `hosts` as the NTP server if there is no access to public NTP servers.
    timezone: "Asia/Shanghai"
    # Specify additional packages to be installed. The ISO file which is contained in the artifact is required.
    rpms:
      - nfs-utils
    # Specify additional packages to be installed. The ISO file which is contained in the artifact is required.
    debs: 
      - nfs-common
    #preInstall:  # Custom init shell scripts for each node, executed in list order during the first stage.
    #  - name: format and mount disk
    #    bash: /bin/bash -x setup-disk.sh
    #    materials: # A script may have dependency materials; these are copied to the node.
    #      - ./setup-disk.sh # the script to be executed
    #      -  xxx            # other materials needed by the script
    #postInstall: # Custom cleanup shell scripts for each node, executed after the Kubernetes installation.
    #  - name: clean tmp files
    #    bash: |
    #       rm -fr /tmp/kubekey/*
    #skipConfigureOS: true # Do not pre-configure the host OS (e.g. kernel modules, /etc/hosts, sysctl.conf, NTP servers, etc). You will have to set these things up via other methods before using KubeKey.

  kubernetes:
    #kubelet start arguments
    #kubeletArgs:
      # Directory path for managing kubelet files (volume mounts, etc).
    #  - --root-dir=/var/lib/kubelet
    version: v1.21.5
    # Optional extra Subject Alternative Names (SANs) to use for the API Server serving certificate. Can be both IP addresses and DNS names.
    apiserverCertExtraSans:  
      - 192.168.8.8
      - lb.kubespheredev.local
    # Container Runtime, support: containerd, cri-o, isula. [Default: docker]
    containerManager: docker
    clusterName: cluster.local
    # Whether to install a script which can automatically renew the Kubernetes control plane certificates. [Default: false]
    autoRenewCerts: true
    # masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. [Default: false].
    masqueradeAll: false
    # maxPods is the number of Pods that can run on this Kubelet. [Default: 110]
    maxPods: 110
    # podPidsLimit is the maximum number of PIDs in any pod. [Default: 10000]
    podPidsLimit: 10000
    # The internal network node size allocation. This is the size allocated to each node on your network. [Default: 24]
    nodeCidrMaskSize: 24
    # Specify which proxy mode to use. [Default: ipvs]
    proxyMode: ipvs
    # enable featureGates, [Default: {"ExpandCSIVolumes":true,"RotateKubeletServerCertificate": true,"CSIStorageCapacity":true, "TTLAfterFinished":true}]
    featureGates: 
      CSIStorageCapacity: true
      ExpandCSIVolumes: true
      RotateKubeletServerCertificate: true
      TTLAfterFinished: true
    ## support kata and NFD
    # kata:
    #   enabled: true
    # nodeFeatureDiscovery
    #   enabled: true
    # additional kube-proxy configurations
    kubeProxyConfiguration:
      ipvs:
        # CIDR's to exclude when cleaning up IPVS rules.
        # necessary to put node cidr here when internalLoadbalancer=kube-vip and proxyMode=ipvs
        # refer to: https://github.com/kubesphere/kubekey/issues/1702
        excludeCIDRs:
          - 172.16.0.2/24
  etcd:
    # Specify the type of etcd used by the cluster. When the cluster type is k3s, setting this parameter to kubeadm is invalid. [kubekey | kubeadm | external] [Default: kubekey]
    type: kubekey  
    ## The following parameters need to be added only when the type is set to external.
    ## caFile, certFile and keyFile need not be set, if TLS authentication is not enabled for the existing etcd.
    # external:
    #   endpoints:
    #     - https://192.168.6.6:2379
    #   caFile: /pki/etcd/ca.crt
    #   certFile: /pki/etcd/etcd.crt
    #   keyFile: /pki/etcd/etcd.key
    dataDir: "/var/lib/etcd"
    # Time (in milliseconds) of a heartbeat interval.
    heartbeatInterval: 250
    # Time (in milliseconds) for an election to timeout. 
    electionTimeout: 5000
    # Number of committed transactions to trigger a snapshot to disk.
    snapshotCount: 10000
    # Auto compaction retention for mvcc key value store in hour. 0 means disable auto compaction.
    autoCompactionRetention: 8
    # Set level of detail for etcd exported metrics, specify 'extensive' to include histogram metrics.
    metrics: basic
    ## Etcd has a default of 2G for its space quota. If you put a value in etcd_memory_limit which is less than
    ## etcd_quota_backend_bytes, you may encounter out of memory terminations of the etcd cluster. Please check
    ## etcd documentation for more information.
    # 8G is a suggested maximum size for normal environments and etcd warns at startup if the configured value exceeds it.
    quotaBackendBytes: 2147483648 
    # Maximum client request size in bytes the server will accept.
    # etcd is designed to handle small key value pairs typical for metadata.
    # Larger requests will work, but may increase the latency of other requests
    maxRequestBytes: 1572864
    # Maximum number of snapshot files to retain (0 is unlimited)
    maxSnapshots: 5
    # Maximum number of wal files to retain (0 is unlimited)
    maxWals: 5
    # Configures log level. Only supports debug, info, warn, error, panic, or fatal.
    logLevel: info
  network:
    plugin: calico
    calico:
      ipipMode: Always  # IPIP Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, vxlanMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Always]
      vxlanMode: Never  # VXLAN Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, ipipMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Never]
      vethMTU: 0  # The maximum transmission unit (MTU) setting determines the largest packet size that can be transmitted through your network. By default, MTU is auto-detected. [Default: 0]
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  storage:
    openebs:
      basePath: /var/openebs/local # base path of the local PV provisioner
  registry:
    registryMirrors: []
    insecureRegistries: []
    privateRegistry: ""
    namespaceOverride: ""
    auths: # if docker add by `docker login`, if containerd append to `/etc/containerd/config.toml`
      "dockerhub.kubekey.local":
        username: "xxx"
        password: "***"
        skipTLSVerify: false # Allow contacting registries over HTTPS with failed TLS verification.
        plainHTTP: false # Allow contacting registries over HTTP.
        certsPath: "/etc/docker/certs.d/dockerhub.kubekey.local" # Use certificates at path (*.crt, *.cert, *.key) to connect to the registry.
  addons: [] # You can install cloud-native addons (Chart or YAML) by using this field.
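
After customizing the configuration file, a cluster can be created from it; as a sketch (file name as above, KubeSphere version illustrative):

# Create a cluster from the customized configuration file.
./kk create cluster -f config-sample.yaml

# Optionally deploy KubeSphere together with the cluster.
./kk create cluster -f config-sample.yaml --with-kubesphere v3.3.2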

Command List

The following are the commands supported by KubeKey; detailed usage is documented on the linked pages.

Command                      Description
kk add nodes                 Add nodes to a cluster
kk artifact export           Build and export an offline installation package from a cluster manifest
kk artifact images push      Push the images in an offline installation package to a given registry
kk artifact import           Import (unpack) an offline installation package
kk certs check-expiration    Check the expiration of cluster certificates
kk certs renew               Renew cluster certificates
kk create cluster            Create a cluster
kk create config             Create a cluster configuration file
kk create manifest           Generate a cluster manifest (component and image list)
kk alpha create phase        Create a cluster in phases
kk cri migrate               Migrate the container runtime (CRI) of cluster nodes (experimental)
kk delete cluster            Delete a cluster
kk delete node               Delete a cluster node
kk init os                   Install OS dependencies on cluster nodes
kk init registry             Create an image registry
kk plugin list               List kubekey plugins
kk alpha upgrade phase       Upgrade a cluster in phases
kk upgrade                   Upgrade a cluster
kk version                   Show version information
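
As a sketch of common day-2 usage (the configuration file name is illustrative; the command shapes follow the table above):

# Add the nodes newly declared in the configuration file to the cluster.
./kk add nodes -f config-sample.yaml

# Check and renew cluster certificates.
./kk certs check-expiration -f config-sample.yaml
./kk certs renew -f config-sample.yaml

# Upgrade the cluster to a newer Kubernetes version (example version).
./kk upgrade --with-kubernetes v1.22.12 -f config-sample.yaml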

Common Operations Information

KubeKey uses kubeadm for cluster lifecycle management, so the deployment directories and the way components are started are the same as in a kubeadm-managed cluster.
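
Because the cluster is kubeadm-managed, standard kubeadm commands also work on control plane nodes, for example (assuming kubeadm v1.20+, where the certs subcommand is stable):

kubeadm certs check-expiration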

Common Operations Commands

View kubelet logs: journalctl -f -u kubelet

View etcd logs: journalctl -f -u etcd

Restart etcd: systemctl restart etcd

Restart docker: systemctl restart docker

Restart kubelet: systemctl restart kubelet

Restart kube-apiserver: docker ps -af name=k8s_kube-apiserver* -q | xargs --no-run-if-empty docker rm -f

Restart kube-scheduler: docker ps -af name=k8s_kube-scheduler* -q | xargs --no-run-if-empty docker rm -f

Restart kube-controller-manager: docker ps -af name=k8s_kube-controller-manager* -q | xargs --no-run-if-empty docker rm -f
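
These restart commands assume the docker runtime. On containerd-based nodes, an equivalent approach (an assumption, not from the original page) is to stop the static-pod container with crictl and let kubelet recreate it:

# Stop the running kube-apiserver container; kubelet restarts it from the static Pod manifest.
crictl ps --name kube-apiserver -q | xargs --no-run-if-empty crictl stop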

Key Directories

Component binaries: /usr/local/bin (kubelet / kubeadm / kubectl / helm / etcd)

systemd unit directory: /etc/systemd/system (kubelet / docker / etcd)

CNI configuration directory: /etc/cni/net.d

CNI binaries: /opt/cni/bin

etcd configuration file: /etc/etcd.env

etcd certificate directory: /etc/ssl/etcd/ssl

etcd data directory: /var/lib/etcd

etcd backup directory: /var/backups/kube_etcd

docker configuration file: /etc/docker/daemon.json

docker data directory: /var/lib/docker

kubelet data directory: /var/lib/kubelet

kubelet configuration files: /var/lib/kubelet/config.yaml /var/lib/kubelet/kubeadm-flags.env

Kubernetes certificate directory: /etc/kubernetes/pki

Static Pod directory: /etc/kubernetes/manifests