
Don't limit CODER_NAMESPACE to a single namespace #5

Open
hh opened this issue Jul 10, 2023 · 5 comments
Labels
enhancement New feature or request

Comments

@hh

hh commented Jul 10, 2023

From https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ :

There are benefits to deploying per-user namespaces:

  • Ability to give the user control over their own namespace via RBAC (deploying other objects / API Isolation)
  • Ability to persist expensive objects like cert-manager / Let's Encrypt certificates (some objects take a lot of time to provision)
  • Ability to isolate traffic between multiple users / namespaces

We create a namespace per user and do not destroy it when a workspace is torn down. This allows expensive objects (like cert-manager/Let's Encrypt certificates and DNS records) to persist and be reused across multiple workspaces from the same user.

Some resources we use per user/namespace:

  • Issuer (cert-manager w/ DNS01 for wildcard certificates)
  • Certificate (this can take ~40 seconds to provision from Let's Encrypt; see the sketch after this list)
  • tls-secret (the TLS Secret generated from the Certificate)
  • Wildcard Ingress (each user gets their own namespace AND *.username.coder.website, accessible without Coder)
  • RoleBinding w/ admin over their own namespace (we allow them to create whatever other resources they want within their namespace): RBAC
  • ResourceQuota to ensure one user doesn't take over all the resources on a node
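
As an illustration of the Issuer/Certificate items above, a minimal per-user wildcard Certificate sketch (cert-manager v1 API; the issuer name, domain, and namespace below are placeholders, not our exact manifests):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard
  namespace: username               # the per-user namespace
spec:
  secretName: tls-secret            # consumed by the wildcard Ingress
  dnsNames:
    - "*.username.coder.website"
  issuerRef:
    kind: Issuer
    name: letsencrypt-dns01         # placeholder Issuer configured for DNS01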
@bpmct
Member

bpmct commented Jul 10, 2023

I see. How are you currently provisioning per-user resources? This is actually a feature we've considered doing in Coder.

I think there may be some limitations with Helm around provisioning multi-namespace resources. @hh - would it be possible to provision a coder-logstream-kube per user/namespace as well? That may be a nice workaround.

@hh
Author

hh commented Jul 10, 2023

Ideally everything runs within the namespace we create for them; what token would the coder-logstream-kube pod use?

https://github.com/cloudnative-coop/space-templates/blob/canon/equipod/namedspaced.tf#L6-L15

resource "null_resource" "namespace" {
  # install kubectl
  provisioner "local-exec" {
    command = "~/kubectl version --client || (curl -L https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl -o ~/kubectl && chmod +x ~/kubectl)"
  }
  provisioner "local-exec" {
    command = "~/kubectl create ns ${local.spacename}"
  }
  provisioner "local-exec" {
    command = "~/kubectl -n ${local.spacename} apply -f ${path.module}/manifests/admin-sa.yaml"
  }
}

https://raw.githubusercontent.com/cloudnative-coop/space-templates/canon/equipod/manifests/admin-sa.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
automountServiceAccountToken: true
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: admin
rules:
  - apiGroups:
      - ""
    resources:
      - "*"
    verbs:
      - "*"
  - apiGroups:
      - "*"
    resources:
      - "*"
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: admin
subjects:
  - kind: ServiceAccount
    name: admin

@bpmct
Member

bpmct commented Jul 10, 2023

I see. Currently, coder-logstream-kube runs within the namespace and doesn't require a token! It uses the token from each workspace's pod spec (which is scoped to only send agent logs/stats for the specific workspace).

helm install coder-logstream-kube coder-logstream-kube/coder-logstream-kube \
    --namespace coder \
    --set url=<your-coder-url>
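
For the per-user/namespace workaround mentioned above, a minimal values sketch (hypothetical; it assumes the chart exposes the url and namespace values discussed later in this thread, and user-alice is a placeholder namespace):

# values.yaml - one release per user namespace (sketch, not a tested config)
url: https://coder.example.com
# assumed chart value scoping which pods are watched (maps to CODER_NAMESPACE)
namespace: user-alice

Each release would then be installed with --namespace user-alice -f values.yaml.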

@sreya
Collaborator

sreya commented Feb 15, 2024

@hh are you still looking to do a single logstream-kube deployment for multiple workspaces? Wondering if this is worth supporting or not. You'll have to be OK with a ClusterRole/ClusterRoleBinding if you don't want to limit it to a single namespace.
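
A minimal sketch of what that cluster-scoped RBAC could look like, mirroring the namespaced Role quoted in the next comment (names are illustrative, and the ServiceAccount name/namespace are assumptions based on the values shown below):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: coder-logstream-kube
rules:
- apiGroups: [""]
  resources: ["pods", "events"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["apps"]
  resources: ["replicasets", "events"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: coder-logstream-kube
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: coder-logstream-kube
subjects:
- kind: ServiceAccount
  name: coder-logstream-kube
  namespace: coder  # namespace where the deployment runs (assumption)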

@matifali
Member

From #28:

v0.0.9-rc.0 - use case: one logstream-kube deployment watching pods in multiple namespaces.

If the namespace value is unset, logstream-kube should default to watching pods in all namespaces (assuming the proper permissions). This is not currently the case. My values, installed in the coder namespace:

USER-SUPPLIED VALUES:
url: https://eric-aks.demo.coder.com

My workspace is running in the coder-workspaces namespace, with the following Role and RoleBinding deployed:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: coder-logstream-kube-role
rules:
- apiGroups: [""]
  resources: ["pods", "events"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["apps"]
  resources: ["replicasets", "events"]
  verbs: ["get", "watch", "list"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: coder-logstream-kube-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: coder-logstream-kube-role
subjects:
- kind: ServiceAccount
  name: coder-logstream-kube
  namespace: coder
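
Note that a RoleBinding only grants access within the namespace it is created in, so this Role/RoleBinding pair would have to be recreated in every namespace logstream-kube should watch; the cluster-scoped ClusterRole/ClusterRoleBinding sketched above avoids that per-namespace duplication.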
