docs: Clean up k8s with cri-containerd howto #251

Merged
**File changed:** `how-to/how-to-use-k8s-with-cri-containerd-and-kata.md` (325 changes: 185 additions & 140 deletions)
# How to use Kata Containers and CRI (containerd plugin) with Kubernetes

* [Requirements](#requirements)
* [Install containerd with CRI plugin enabled](#install-containerd-with-cri-plugin-enabled)
* [Install Kata Containers](#install-kata-containers)
* [Install Kubernetes](#install-kubernetes)
* [Configure containerd to use Kata Containers](#configure-containerd-to-use-kata-containers)
* [Define the Kata runtime as the untrusted workload runtime](#define-the-kata-runtime-as-the-untrusted-workload-runtime)
* [Configure Kubelet to use containerd](#configure-kubelet-to-use-containerd)
* [Configure proxy - OPTIONAL](#configure-proxy---optional)
* [Start Kubernetes](#start-kubernetes)
* [Install a Pod Network](#install-a-pod-network)
* [Allow pods to run in the master node](#allow-pods-to-run-in-the-master-node)
* [Create an untrusted pod using Kata Containers](#create-an-untrusted-pod-using-kata-containers)
* [Delete created pod](#delete-created-pod)

This document describes how to set up a single-machine Kubernetes (k8s) cluster.

The Kubernetes cluster will use the
[CRI containerd plugin](https://github.com/containerd/cri) and
[Kata Containers](https://katacontainers.io) to launch untrusted workloads.

## Requirements

- Kubernetes, kubelet, kubeadm
- cri-containerd
- Kata Containers

> **Note:** For information about the supported versions of these components,
> see the Kata Containers
> [versions.yaml](https://github.com/kata-containers/runtime/blob/master/versions.yaml)
> file.
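
If you want a quick look at the versions pinned in that file without opening it in a
browser, a one-liner such as the following can help. This is only an illustrative sketch,
not part of the original guide: it assumes `curl` is installed and that the file keeps a
`kubernetes` entry.

```bash
# Print the pinned kubernetes entry (plus a few lines of context) from versions.yaml
$ curl -sL https://raw.githubusercontent.com/kata-containers/runtime/master/versions.yaml | grep -A 3 'kubernetes:'
```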

## Install containerd with CRI plugin enabled

- Follow the instructions from the
[CRI installation guide](http://github.com/containerd/cri/blob/master/docs/installation.md).

- Check if `containerd` is now available

```bash
$ command -v containerd
```
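
If `command -v` finds the binary, you can optionally go one step further and confirm the
daemon itself is healthy. This extra check is not part of the original guide and assumes
`containerd` was installed as a systemd service, as the CRI installation guide does:

```bash
# Report the installed version and confirm the service is active
$ containerd --version
$ sudo systemctl status containerd
```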

## Install Kata Containers

Follow the instructions to
[install Kata Containers](https://github.com/kata-containers/documentation/blob/master/install/README.md).

- Check if `kata-runtime` is now available

```bash
$ command -v kata-runtime
```

- Check Kata Containers is configured correctly

```bash
$ kata-runtime kata-env
```

## Install Kubernetes

- Follow the instructions for
[kubeadm installation](https://kubernetes.io/docs/setup/independent/install-kubeadm/).

- Check `kubeadm` is now available

```bash
$ command -v kubeadm
```
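
The kubeadm installation guide also installs `kubelet` and `kubectl`. As an optional
extra check (not part of the original guide), print the versions of all three to confirm
they match the release you intended to install:

```bash
$ kubeadm version
$ kubelet --version
$ kubectl version --client
```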

## Configure containerd to use Kata Containers

The CRI `containerd` plugin supports configuration for two runtime types.

- **Default runtime:**

A runtime that is used by default to run workloads.

- **Untrusted workload runtime:**

A runtime that will be used to run untrusted workloads. This is appropriate
for workloads that require a higher degree of security isolation.

### Define the Kata runtime as the untrusted workload runtime

Configure `containerd` to use the Kata runtime to run untrusted workloads by
setting the `plugins.cri.containerd.untrusted_workload_runtime`
[config option](https://github.com/containerd/cri/blob/v1.0.0-rc.0/docs/config.md):

> **Review comment (Contributor):** The link here points to a specific, and old,
> version of containerd. Is that deliberate? It feels wrong. It should probably
> point either at the master branch, or at the version that matches the one in
> your `versions.yaml`. I vote for master.

```bash
# Configure containerd to use Kata as untrusted_workload_runtime
$ sudo mkdir -p /etc/containerd/
$ cat << EOT | sudo tee /etc/containerd/config.toml
[plugins]
    [plugins.cri.containerd]
      [plugins.cri.containerd.untrusted_workload_runtime]
        runtime_type = "io.containerd.runtime.v1.linux"
        # Adjust this path if kata-runtime is installed elsewhere
        runtime_engine = "/usr/bin/kata-runtime"
EOT
```

> **Note:** Unless configured otherwise, the default runtime is set to `runc`.
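
The example above only wires up the *untrusted workload* runtime. If you instead want
Kata Containers to be the default runtime for all workloads, the same file can carry a
`default_runtime` section. The snippet below is only a sketch based on the CRI plugin
configuration documentation linked above; verify the option names against the plugin
version you are running before using it:

```bash
$ cat << EOT | sudo tee /etc/containerd/config.toml
[plugins]
    [plugins.cri.containerd]
      [plugins.cri.containerd.default_runtime]
        runtime_type = "io.containerd.runtime.v1.linux"
        # Adjust this path if kata-runtime is installed elsewhere
        runtime_engine = "/usr/bin/kata-runtime"
EOT
```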

## Configure Kubelet to use containerd

In order to allow kubelet to use containerd (using the CRI interface), configure the service to point to the `containerd` socket.

- Configure Kubernetes to use `containerd`

```bash
$ sudo mkdir -p /etc/systemd/system/kubelet.service.d/
$ cat << EOF | sudo tee /etc/systemd/system/kubelet.service.d/0-containerd.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF
```

- Inform systemd about the new configuration

```bash
$ sudo systemctl daemon-reload
```
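
If you want to double-check that systemd picked up the drop-in (an optional step, not
part of the original guide), `systemctl cat` prints the unit file together with any
drop-in fragments, including the `0-containerd.conf` file created above:

```bash
$ systemctl cat kubelet
```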

## Configure proxy - OPTIONAL

If you are behind a proxy, use the following script to configure your proxy for docker, kubelet, and containerd:

```bash
# Set proxies
$ services="
kubelet
containerd
docker
"

$ for service in ${services}; do

    service_dir="/etc/systemd/system/${service}.service.d/"
    sudo mkdir -p ${service_dir}

    cat << EOT | sudo tee "${service_dir}/proxy.conf"
[Service]
Environment="HTTP_PROXY=${http_proxy}"
Environment="HTTPS_PROXY=${https_proxy}"
Environment="NO_PROXY=${no_proxy}"
EOT
done

$ sudo systemctl daemon-reload
```

> **Review discussion:**
>
> - **Contributor:** I wonder if coding this as a `#!/bin/bash` script here will make it
>   easier for a cut/paste for the user. I know we don't tend to do that for other inline
>   scripts, so am happy to leave as is as well.
> - **Contributor Author:** Good point - I've simplified that block to not require bash now.
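
To confirm the proxy settings were applied to a given service (an optional check, not
part of the original guide), ask systemd for the effective `Environment` entries of that
unit:

```bash
$ systemctl show --property=Environment containerd
```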

## Start Kubernetes

- Make sure `containerd` is up and running

```bash
$ sudo systemctl restart containerd
$ sudo systemctl status containerd
```

- Prevent conflicts between `docker` iptables (packet filtering) rules and k8s pod communication

If Docker is installed on the node, it is necessary to modify the rule
below. See https://github.com/kubernetes/kubernetes/issues/40182 for further
details.

```bash
$ sudo iptables -P FORWARD ACCEPT
```

> **Review discussion:**
>
> - **Contributor:** Ideally I'd like a touch more info around what this is fixing, how,
>   and why it is not a gaping security hole (or a caveat note if it is).
> - **Contributor Author:** Well, this PR is for reworking the formatting, but I'm happy
>   to take input from @jcvenegas with more details.
> - **Member:** @jodh-intel this is used for the case where Docker is installed on the
>   node: Docker sets iptables rules that do not allow container communication. The issue
>   is kubernetes/kubernetes#40182.
> - **Member:** I did not test whether this is still needed, but it is kept as legacy
>   from our initial k8s installation script.
> - **Contributor Author:** Thanks @jcvenegas - branch updated.
> - **Contributor Author:** @grahamwhaley, @jcvenegas - can you provide some suitable
>   "security hole or not" words I can add? /cc @mcastelino.
> - **Contributor:** The thread over on kubernetes/kubernetes#40182 shows some less
>   broad-spectrum fixes, and also points to what look like two supported solutions:
>   containernetworking/plugins#75 and kubernetes/kubernetes#52569. From reading the
>   thread, it seems the above fix as is is only transient, and not sticky over reboots.
>   I think we should say something like: "For testing, temporarily apply the following
>   rule. Please see the thread https://github.com/kubernetes/kubernetes/issues/40182 for
>   more details on how to configure the k8s networking to route Docker traffic on your
>   system." But let's get some feedback on whether that is the right thing.
> - **Contributor:** We didn't resolve this. I'm not happy with it as is, as it applies a
>   broad-spectrum forward/allow rule, which is then not sticky. We need a decision.
>   /cc @mcastelino @egernst.

- Start cluster using `kubeadm`

```bash
$ sudo kubeadm init --skip-preflight-checks --cri-socket /run/containerd/containerd.sock --pod-network-cidr=10.244.0.0/16
$ export KUBECONFIG=/etc/kubernetes/admin.conf
$ sudo -E kubectl get nodes
$ sudo -E kubectl get pods
```
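
The commands above run `kubectl` through `sudo -E` with `KUBECONFIG` exported. As an
alternative, the standard post-`kubeadm init` steps from the upstream Kubernetes
documentation (not part of this guide) copy the admin credentials into your own home
directory so `kubectl` works without `sudo`:

```bash
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
```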

## Install a Pod Network

A pod network plugin is needed to allow pods to communicate with each other.

- Install the `flannel` plugin by following the
[Using kubeadm to Create a Cluster](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#instructions)
guide, starting from the **Installing a pod network** section.

- Create a pod network using flannel

> **Note:** There is no known way to determine programmatically the best version (commit) to use.
> See https://github.com/coreos/flannel/issues/995.

```bash
$ sudo -E kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

- Wait for the pod network to become available

```bash
# number of seconds to wait for pod network to become available
$ timeout_dns=420

$ while [ "$timeout_dns" -gt 0 ]; do
    if sudo -E kubectl get pods --all-namespaces | grep dns | grep Running; then
        break
    fi

    sleep 1s
    ((timeout_dns--))
done
```

- Check the pod network is running

```bash
$ sudo -E kubectl get pods --all-namespaces | grep dns | grep Running && echo "OK" || ( echo "FAIL" && false )
```

## Allow pods to run in the master node

By default, the cluster will not schedule pods in the master node. To enable master node scheduling:

```bash
$ sudo -E kubectl taint nodes --all node-role.kubernetes.io/master-
```
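
To verify the taint was removed (an optional check, not part of the original guide), the
node description should now report `Taints: <none>`:

```bash
$ sudo -E kubectl describe nodes | grep -i taints
```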

## Create an untrusted pod using Kata Containers

By default, all pods are created with the default runtime configured in the CRI containerd plugin.

If a pod has the `io.kubernetes.cri.untrusted-workload` annotation set to `"true"`, the CRI plugin runs the pod with the
[Kata Containers runtime](https://github.com/kata-containers/runtime/blob/master/README.md).

- Create an untrusted pod configuration

```bash
$ cat << EOT | tee nginx-untrusted.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-untrusted
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
  - name: nginx
    image: nginx
EOT
```

- Create an untrusted pod

```bash
$ sudo -E kubectl apply -f nginx-untrusted.yaml
```

- Check the pod is running

```bash
$ sudo -E kubectl get pods
```

- Check the hypervisor is running

```bash
$ ps aux | grep qemu
```
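
A further illustrative check (not part of the original guide, and assuming the `nginx`
image ships `uname`) is to compare the kernel version reported inside the pod with the
host's: when the pod runs inside a Kata Container, the two usually differ because the
workload runs on the guest VM kernel.

```bash
# Kernel inside the untrusted pod (the guest VM kernel)
$ sudo -E kubectl exec nginx-untrusted -- uname -r
# Host kernel, for comparison
$ uname -r
```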

## Delete created pod

```bash
# Delete pod
$ sudo -E kubectl delete -f nginx-untrusted.yaml
```