This repository has been archived by the owner on Jun 20, 2024. It is now read-only.

Find a way to install weave-kube alongside rkt/hyperkube #2613

Open
awh opened this issue Nov 8, 2016 · 7 comments

Comments

@awh
Contributor

awh commented Nov 8, 2016

From @bboreham on October 13, 2016 10:38

TL;DR: hyperkube sets up its CNI files inside a container, while weave-kube mounts directories from the host and installs Weave Net CNI files into those. K8s needs one directory that has the full set of files.

Full detail from user:

"

  • We run K8S on CoreOS
  • K8S's kubelet is run with the default kubelet wrapper from CoreOS like in https://coreos.com/kubernetes/docs/latest/deploy-master.html#create-the-kubelet-unit
  • We use the hyperkube image quay.io/coreos/hyperkube, which runs k8s in a rkt container rather than natively on the machine
  • The rkt container has the directories /opt/cni/bin and /etc/cni/net.d inside, with the things required for CNI (the flannel and calico binaries, plus the flannel and loopback config files; most notably the loopback binary, which always seems to be needed)
  • Installing weave-kube creates pods which put a couple of Weave binaries and a 10-weave.conf file on the host machine
  • Now, if we mount the CNI directories from the host machine into the container (/opt/cni/bin and /etc/cni/net.d), it fails, since k8s looks for specific binaries (such as "loopback") which are no longer there because of the mount
  • We cannot symlink the binaries directly, since they need to exist when the kubelet is created (the mount points have to exist)
  • If we created empty weave-net files and mounted them, the weave-kube script would not overwrite them with the real ones, since it checks whether they already exist

This leads to a sort of chicken-and-egg problem: We are not able to get the weave binaries in the same folder with the other binaries that k8s needs for CNI to work.
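To make the mismatch concrete, here is a minimal check, a sketch only: loopback ships inside the hyperkube image, while weave-net and weave-ipam are the plugin binaries weave-kube installs on the host. Depending on which view of /opt/cni/bin kubelet sees, one set or the other is missing:

    #!/bin/sh
    # Sketch: kubelet needs the full plugin set in one directory.
    # Depending on whether /opt/cni is mounted from the host, either the
    # hyperkube-shipped loopback or the weave-kube-installed plugins are absent.
    for plugin in loopback weave-net weave-ipam; do
      if [ -x "/opt/cni/bin/$plugin" ]; then
        echo "found   $plugin"
      else
        echo "MISSING $plugin"
      fi
    done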

Some solutions we could think of:

  • Create our own hyperkube image with weave binaries, but we would not like to maintain this
  • Download the CNI binaries into /opt/cni/bin when launching the k8s controller, but this requires keeping track of the current uuid specified in hyperkube's Makefile"

Copied from original issue: weaveworks-experiments/weave-kube#37

@awh
Contributor Author

awh commented Nov 8, 2016

From @bboreham on October 13, 2016 10:38

cc @errordeveloper - any ideas?

@awh
Contributor Author

awh commented Nov 8, 2016

From @bboreham on October 13, 2016 11:07

Possibly we can use a kubelet option: kubelet will always look in /opt/cni/bin for the loopback plugin, while a different code path follows --cni-bin-dir to look for third-party plugins.
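A minimal sketch of that idea (flag names vary across kubelet versions, and the Weave path here is illustrative):

    # Sketch: loopback remains discoverable in the image's default /opt/cni/bin,
    # while third-party plugins are resolved through --cni-bin-dir.
    kubelet \
      --network-plugin=cni \
      --cni-bin-dir=/opt/weave-net/bin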

@mikebryant
Collaborator

We have this working.

Note the mount in the kubelet wrapper that targets /opt/weave-net: kubelet will look in a vendor directory, based on the CNI network type, for the plugins, which means we still get to use the CoreOS-packaged ones in the hyperkube image under /opt/cni.

Our kubelet wrapper:

                - name: kubelet.service
                  enable: true
                  content: |
                    [Service]
                    EnvironmentFile=-/etc/environment
                    Environment="RKT_RUN_ARGS= \
                      --volume=etc-cni,kind=host,source=/etc/cni,readOnly=true \
                      --mount volume=etc-cni,target=/etc/cni \
                      --volume=opt-cni,kind=host,source=/opt/cni,readOnly=true \
                      --mount volume=opt-cni,target=/opt/weave-net \
                      --volume=resolv,kind=host,source=/etc/resolv.conf,readOnly=true \
                      --mount volume=resolv,target=/etc/resolv.conf \
                    "
                    Environment=KUBELET_ACI=quay.io/coreos/hyperkube
                    Environment=KUBELET_VERSION=P_KUBERNETES_VERSION
                    ExecStartPre=/bin/mkdir -p /etc/cni
                    ExecStartPre=/bin/mkdir -p /opt/cni
                    ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
                    ExecStartPre=/bin/mkdir -p /srv/kubernetes/manifests
                    ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
                    # Pull in advance due to Nexus auth requirements
                    ExecStartPre=/bin/docker pull gcr.docker.tech.lastmile.com/google_containers/pause-amd64:3.0
                    ExecStartPre=/bin/rkt gc --grace-period=0s
                    # Download a copy of the aci from our local mirror, due to really slow http_proxy
                    ExecStartPre=/bin/rkt fetch --insecure-options=image P_S3_ENDPOINT_URL/kubernetes-bootstrap/acis/hyperkube/P_KUBERNETES_VERSION.aci
                    ExecStart=/bin/bash -c '/usr/lib/coreos/kubelet-wrapper \
                      --api-servers=https://P_ELBKubernetesAPI:443 \
                      --kubeconfig=/etc/kubernetes/kubeconfig \
                      --lock-file=/var/run/lock/kubelet.lock \
                      --exit-on-lock-contention \
                      --config=/etc/kubernetes/manifests \
                      --allow-privileged \
                      --cloud-provider=openstack \
                      --cloud-config=/etc/kubernetes/cloud_config \
                      --node-labels=master=true \
                      --minimum-container-ttl-duration=3m0s \
                      --cluster_dns=172.31.128.2 \
                      --cluster_domain=cluster.local \
                      --network-plugin=cni \
                      --network-plugin-dir=/etc/cni/net.d \
                      --pod-infra-container-image=gcr.docker.tech.lastmile.com/google_containers/pause-amd64:3.0 \
                      '
                    Restart=always
                    RestartSec=5
                    [Install]
                    WantedBy=multi-user.target

Our weave manifest:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: weave-net
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        name: weave-net
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      hostNetwork: true
      hostPID: true
      containers:
        - name: weave
          image: hub.docker.tech.lastmile.com/weaveworks/weave-kube:1.9.0
          command:
            - /home/weave/launch.sh
          env:
            - name: CHECKPOINT_DISABLE
              value: "1"
            - name: IPALLOC_RANGE
              value: 172.31.0.0/17
            - name: WEAVE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: weave
                  key: password
          livenessProbe:
            initialDelaySeconds: 30
            httpGet:
              host: 127.0.0.1
              path: /status
              port: 6784
          securityContext:
            privileged: true
          volumeMounts:
            - name: weavedb
              mountPath: /weavedb
            - name: cni-bin
              mountPath: /host/opt
            - name: cni-bin2
              mountPath: /host/home
            - name: cni-conf
              mountPath: /host/etc
            - name: dbus
              mountPath: /host/var/lib/dbus
          resources:
            requests:
              cpu: 10m
        - name: weave-npc
          image: hub.docker.tech.lastmile.com/weaveworks/weave-npc:1.9.0
          resources:
            requests:
              cpu: 10m
          securityContext:
            privileged: true
      imagePullSecrets:
      - name: mirror-registries
      restartPolicy: Always
      volumes:
        - name: weavedb
          hostPath:
            path: /var/lib/weave-kube
        - name: cni-bin
          hostPath:
            path: /opt
        - name: cni-bin2
          hostPath:
            path: /home
        - name: cni-conf
          hostPath:
            path: /etc
        - name: dbus
          hostPath:
            path: /var/lib/dbus

@roffe

roffe commented Sep 12, 2017

The problem still exists with the latest CoreOS and the upstream hyperkube image from Google (so it is not CoreOS specific). Using the workaround

          --volume=opt-cni,kind=host,source=/opt/cni,readOnly=true \
          --mount volume=opt-cni,target=/opt/weave-net 

allowed me to install and get it running.

This is a one-year-old bug; is the weave-net project dead?

@rubencabrera

rubencabrera commented Apr 25, 2019

Integrating Istio CNI is trickier than it should be: even with the workaround in place, we have to copy files around in an ExecStartPre task. I think this issue should be revisited and worked on.

@errordeveloper
Contributor

errordeveloper commented Apr 25, 2019 via email

@rubencabrera

Hi @errordeveloper, thanks for following up. I might have been mistaken in assuming this Weave behaviour was the root cause of the problem. It seems kubelet can handle this multiple-CNI-plugin scenario.

Thanks to @YuraBeznos for doing the research on this; the following is a freely edited copy-paste of the outcome he shared:

When running rkt+hyperkube for kubelet, we were trying to keep every CNI file in one place, but kubelet checks different paths: it builds a vendor directory from the template "%s/opt/%s/bin", where the first %s is vendorCNIDirPrefix (which is empty) and the second %s is the "type" value from the CNI config.

Links to the relevant code:

  • kubelet reads (and re-reads) the CNI config dir (/etc/cni/net.d) and gets the "type" value from every file
  • kubelet uses the template above to find where the plugins actually are (the default CNI plugins folder is checked too)
  • Istio's type is istio-cni
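A shell rendering of that lookup (a sketch of the template logic only, not kubelet's actual Go code):

    # Sketch of kubelet's vendor-directory template "%s/opt/%s/bin".
    vendorCNIDirPrefix=""    # first %s: empty by default
    cni_type="weave-net"     # second %s: "type" from the CNI config file
    echo "${vendorCNIDirPrefix}/opt/${cni_type}/bin"   # -> /opt/weave-net/bin
    cni_type="istio-cni"
    echo "${vendorCNIDirPrefix}/opt/${cni_type}/bin"   # -> /opt/istio-cni/bin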

So we are going to test whether it works by configuring istio-cni with cniBinDir=/opt/istio-cni/bin and report what we find out.
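For example, with the istio-cni Helm chart, something like this should place the binaries where kubelet's vendor-dir lookup for type istio-cni expects them (a sketch: the chart reference and Helm 2 syntax are assumptions, only the cniBinDir value comes from the plan above):

    # Sketch: set cniBinDir so istio-cni installs its plugin into the vendor
    # directory kubelet derives from the "istio-cni" type.
    # The chart location below is illustrative, not verified.
    helm install --name istio-cni --namespace kube-system \
      --set cniBinDir=/opt/istio-cni/bin \
      istio.io/istio-cni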
