gefyra run to set common Kubernetes env variables and ServiceAccount data (K8s cert and token) #319
@schwobaseggl Here's my PoC with a plain Ubuntu Pod:
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
spec:
  containers:
    - name: ubuntu
      image: ubuntu
      command:
        - sleep
        - infinity
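The manifest above can be applied and waited on like this (a minimal sketch; ubuntu-pod.yaml is just an assumed file name for the manifest):

# Create the Pod from the manifest above and wait until it is ready
kubectl apply -f ubuntu-pod.yaml
kubectl wait --for=condition=Ready pod/ubuntu --timeout=120s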
In the cluster Pod (ubuntu), package the ServiceAccount data:
cd /var
tar -cz run/secrets/kubernetes.io -f sa.tar.gz
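The same step can also be done non-interactively from the outside; a minimal sketch, assuming the Pod is named ubuntu as above:

# Package the ServiceAccount data without opening an interactive shell in the Pod
kubectl exec ubuntu -- sh -c "cd /var && tar -cz run/secrets/kubernetes.io -f sa.tar.gz"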
In the cluster Pod, also install curl (needed for the API calls below):
apt update && apt install -y curl

Locally, copy the archive out of the Pod and into the local Gefyra container:
kubectl cp ubuntu:var/sa.tar.gz ./sa.tar.gz
docker cp sa.tar.gz ubuntu:var/sa.tar.gz

In the local Gefyra container, unpack it and install curl as well:
cd /var
tar -xzvf sa.tar.gz
apt update && apt install -y curl

In both shells (the K3d Pod and the local Gefyra container), this should work:

APISERVER=https://kubernetes
# Path to ServiceAccount token
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
# Read this Pod's namespace
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICEACCOUNT}/token)
# Reference the internal certificate authority (CA)
CACERT=${SERVICEACCOUNT}/ca.crt
# Explore the API with TOKEN
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api
{
"kind": "APIVersions",
"versions": [
"v1"
],
"serverAddressByClientCIDRs": [
{
"clientCIDR": "0.0.0.0/0",
"serverAddress": "172.20.0.2:6443"
}
]
}
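Inside a Pod, the same API server address can also be built from the injected environment variables rather than the kubernetes DNS name, which is precisely what this issue asks Gefyra to replicate locally; a small sketch:

# Derive the API server address from the standard in-cluster environment variables
APISERVER=https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" ${APISERVER}/version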
Here's the PoC for a specific ServiceAccount (when one is given explicitly):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mysa
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-manager
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: manage-pods
subjects:
  - kind: ServiceAccount
    name: mysa
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-manager
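Assuming the manifest above is saved as sa-rbac.yaml (the file name is only an example), it can be applied and the resulting permissions checked like this:

# Apply the ServiceAccount and RBAC objects, then verify the granted permissions
kubectl apply -f sa-rbac.yaml
kubectl auth can-i list pods --as=system:serviceaccount:default:mysa   # expected: yes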
# Find the token Secret that belongs to the mysa ServiceAccount and decode its token
SECRET=$(kubectl get serviceaccount mysa -o json | jq -Mr '.secrets[].name | select(contains("token"))')
TOKEN=$(kubectl get secret ${SECRET} -o json | jq -Mr '.data.token' | base64 -d)
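Note: on Kubernetes 1.24 and later, ServiceAccounts no longer get a token Secret created automatically, so the lookup above may come back empty; in that case a token can be requested directly:

# Alternative on newer clusters: request a token for the ServiceAccount
TOKEN=$(kubectl create token mysa)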
# Locally: replace the default token inside the Gefyra container with the mysa token
docker exec ubuntu bash -c "echo '${TOKEN}' > /var/run/secrets/kubernetes.io/serviceaccount/token"
# In the local Gefyra container: re-read the token
TOKEN=$(cat ${SERVICEACCOUNT}/token)
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/default/pods/
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "2498"
},
"items": [
{
"metadata": {
"name": "ubuntu",
"namespace": "default",
"uid": "4cd343e6-7854-4d63-a21c-c4efe479341b",
"resourceVersion": "877",
"creationTimestamp": "2023-03-17T16:26:40Z",
"annotations": {
[...]
"startTime": "2023-03-17T16:26:40Z",
"containerStatuses": [
{
"name": "ubuntu",
"state": {
"running": {
"startedAt": "2023-03-17T16:27:09Z"
}
},
"lastState": {
},
"ready": true,
"restartCount": 0,
"image": "docker.io/library/ubuntu:latest",
"imageID": "docker.io/library/ubuntu@sha256:67211c14fa74f070d27cc59d69a7fa9aeff8e28ea118ef3babc295a0428a6d21",
"containerID": "containerd://5c7bae9481439bd5e35f08caf7446e47b0f30a963ec06cc5d66f9c445a5b919a",
"started": true
}
],
"qosClass": "BestEffort"
}
}
]
}
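To double-check that the token is scoped by the pod-manager ClusterRole, a request outside the granted resources should be rejected; a quick negative test (sketch):

# Secrets are not covered by the pod-manager ClusterRole, so this should return 403 Forbidden
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/default/secrets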
I'd say, for getting the env from a Gefyra phantom Pod in the cluster, something like this would do:

apiVersion: v1
kind: Pod
metadata:
  name: my-phantom-run-1
spec:
  containers:
    - name: busybox
      image: busybox
      command:
        - sleep
        - infinity

And once the Pod is running:

kubectl exec my-phantom-run-1 -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=my-phantom-run-1
KUBERNETES_SERVICE_HOST=10.43.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.43.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.43.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.43.0.1
HOME=/root

While copying the environment to the local Gefyra container, we should modify the KUBERNETES_* values as needed.
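A rough sketch of what that copy step could look like, expressed as plain shell (the filtering, the container name myapp and its image are assumptions for illustration, not actual Gefyra behaviour):

# Collect the KUBERNETES_* variables from the phantom Pod and turn them into docker --env flags
ENV_FLAGS=$(kubectl exec my-phantom-run-1 -- env | grep '^KUBERNETES_' | sed 's/^/--env /' | tr '\n' ' ')
# Start the local container with the copied environment; the service host/port would still have
# to be rewritten so that traffic reaches the API server through the Gefyra tunnel
docker run -d --name myapp ${ENV_FLAGS} myapp:latest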
@Schille Reproduction with the exact steps above. Cluster spins up without issues; all steps up to 5. complete with no issues.
Output in the k3d Pod:
Output in the Gefyra docker container:
In the Gefyra docker container:
What is the new feature about?
At the least, these two are required in order to connect a locally running container to the remote K8s API through an internal path:
- KUBERNETES_SERVICE_HOST (the API server's cluster IP)
- KUBERNETES_SERVICE_PORT (its port)
In addition, it would be a nice-to-have to get/assign a ServiceAccount for the local container.
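To make this concrete, a hand-rolled sketch of what such behaviour would roughly amount to for a plain docker container (the image name, address and mounted directory are made-up examples, not actual Gefyra behaviour):

# Illustrative only: inject the in-cluster discovery variables and ServiceAccount files by hand
docker run -d --name myapp \
  --env KUBERNETES_SERVICE_HOST=10.43.0.1 \
  --env KUBERNETES_SERVICE_PORT=443 \
  --volume "$(pwd)/serviceaccount:/var/run/secrets/kubernetes.io/serviceaccount:ro" \
  myapp:latest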
Why would such a feature be important to you?
When writing applications that communicate with the K8s API server, it is important to make the API server address available to the locally running container.
Anything else we need to know?
No response