Authentication guidelines #574

Closed
cescoferraro opened this issue Mar 24, 2016 · 56 comments

@cescoferraro

Issue details

Environment
Dashboard version: kubernetes-dashboard-canary
Kubernetes version: v1.2
Operating system: CoreOS 899 self hosted on Digital Ocean through Terraform
Go version: go version go1.6 darwin/amd64
Steps to reproduce

I am deploying kubernetes-dashboard-canary.

Observed result

using the insecure port
- --apiserver-host=http://{{ .Env.MASTER_PRIVATE }}:8080
it works, but there is no restriction on who can access the dashboard.

using the secure port
- --apiserver-host=https://{{ .Env.MASTER_PRIVATE }}:6443
I get:
Get https://10.134.19.15:6443/api/v1/replicationcontrollers: x509: failed to load system roots and no roots provided
as expected, because the pod IP is not in the certificates I generated, as the kube-apiserver confirms:
Mar 24 22:59:32 master kube-apiserver[1298]: I0324 22:59:32.632002 1298 logs.go:41] http: TLS handshake error from 10.100.68.2:45592: remote error: bad certificate

I am using basic authentication to the API and was expecting something similar for the dashboard. What is the dashboard's way of doing this?
Do I have to build a new Docker image that copies the certificates to /etc/kubernetes/ssl/?

@bryk
Contributor

bryk commented Mar 29, 2016

@luxas Can you chime in?

@bryk
Contributor

bryk commented Mar 29, 2016

@cescoferraro When you run the Dashboard UI inside your cluster, the UI should attempt to connect securely to the master using service accounts. This is our default behaviour.

@bryk
Contributor

bryk commented Mar 29, 2016

Re securing the Dashboard UI itself: at the moment you should secure it by accessing it via the service proxy or through a firewalled private network. We're working on IAM/auth support in the meantime.
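
(For reference, the service-proxy route looks roughly like this; the proxy URL below is the 1.2-era convention discussed later in this thread, so treat the exact path as an assumption:)

# start an authenticated local proxy to the apiserver, using your kubeconfig credentials
kubectl proxy --port=8001
# then browse to the Dashboard through the apiserver's service proxy:
#   http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/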

@luxas
Member

luxas commented Mar 29, 2016

@bryk FYI, this PR will make local docker users more happy: kubernetes/kubernetes#23550

In this case @cescoferraro, I think you should set

--root-ca-file=/home/core/ssl/ca.pem
--service-account-private-key-file=/home/core/ssl/kubelet-key.pem

as @antoineco pointed out.
And if you sign /home/core/ssl/* certs with 10.100.68.2 as one SAN, I think it should be OK.
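
(For anyone else wondering where these flags go, a question that comes up again further down: they are normally kube-controller-manager flags, since the controller-manager signs service-account tokens, while the apiserver verifies them with the matching key. A minimal sketch, reusing the paths above:)

# kube-controller-manager signs service-account tokens and distributes the root CA
# that pods later see at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt:
kube-controller-manager --root-ca-file=/home/core/ssl/ca.pem --service-account-private-key-file=/home/core/ssl/kubelet-key.pem ...

# kube-apiserver verifies those tokens with the matching key:
kube-apiserver --service-account-key-file=/home/core/ssl/kubelet-key.pem ...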

@cescoferraro
Author

I got it all working now. I was having problems with the certificates: I was signing them with CloudFlare's cfssl tool; now I create them all with easy_rsa and leave cfssl to generate only the etcd certificates. It has something to do with the fact that the kubelet expects unencrypted certificate files. Anyway.

The point I want to raise here is that you MUST have the --basic-auth-file flag on the kube-apiserver invocation to authenticate the dashboard without a service proxy (I am using romulus, btw, and it is great!). Token authentication is not enough, because the browser does not know how to ask for a token, so it lets everybody in, but it does know how to ask for a username/password. And it's not just the dashboard: Kubedash behaves the same way, and the documentation also lacks this info.

This leaves newbie users like me lost, especially on bare-metal installations. The default service-account behaviour is not explicitly documented, so valuable info like what @luxas just showed us stays hidden in issue comments. I can submit a PR with a 101/ABC guide for kube-system apps on bare metal if that would be useful for the repo.
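
(A note for other newbies hitting this: the --basic-auth-file mentioned above is just a CSV handed to the apiserver, one line per user; the file name below is a placeholder:)

# /srv/kubernetes/basic-auth.csv -- format: password,username,uid[,"group1,group2"]
# example line:
#   s3cret,admin,1000
kube-apiserver --basic-auth-file=/srv/kubernetes/basic-auth.csv ...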

@luxas
Member

luxas commented Mar 29, 2016

@cescoferraro Yes, please start writing a doc about this and I'll review it and maybe fill in some details.

@bryk
Contributor

bryk commented Mar 30, 2016

@cescoferraro We're aware of the issues on the Dashboard UI side. This will be fixed. No concrete ETA though.

@bryk bryk added this to the v1.1 milestone Apr 27, 2016
@euprogramador

It would be good if the setting could be made via a kubeconfig file. My cluster is configured with client certificates for communication with the apiserver, and the dashboard does not support client certificates.

@cescoferraro
Author

This is related:
kubernetes-retired/contrib#877

@bryk
Contributor

bryk commented May 4, 2016

It would be good if the setting could be made via a kubeconfig file. My cluster is configured with client certificates for communication with the apiserver, and the dashboard does not support client certificates.

It actually should support kubeconfig files. It works the same way as kubectl does, i.e., you mount the config file in a specific directory and that's it. Can you verify this?
If this does not work, the fix should be trivial.

@euprogramador

Where do I put the kubeconfig file?

@bryk
Contributor

bryk commented May 5, 2016

All right, I've done some investigation on this.

Currently kubeconfig files do not work, but I have a fix for this problem that will go into the next release. After the release you'll be able to use your kubectl kubeconfig file with the UI. In a cluster you can mount it at ~/.kube/config, or mount it anywhere and point to it with the environment variable KUBECONFIG=/path/to/file.
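
(Once that fix is in, a mounted kubeconfig would look roughly like the fragment below; the secret name and mount path are made up for illustration, and only the KUBECONFIG variable is what bryk describes:)

# kubectl create secret generic dashboard-kubeconfig -n kube-system --from-file=config=$HOME/.kube/config
containers:
- name: kubernetes-dashboard
  image: gcr.io/google_containers/kubernetes-dashboard-amd64:canary
  env:
  - name: KUBECONFIG
    value: /etc/kubeconfig/config   # point the UI at the mounted file
  volumeMounts:
  - name: kubeconfig
    mountPath: /etc/kubeconfig
    readOnly: true
volumes:
- name: kubeconfig
  secret:
    secretName: dashboard-kubeconfig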

@bryk bryk self-assigned this May 5, 2016
@bryk
Contributor

bryk commented May 5, 2016

More about kubeconfig: http://kubernetes.io/docs/user-guide/sharing-clusters/

@CPlusPlus17

@luxas
Where exactly should I set these?

--root-ca-file=/home/core/ssl/ca.pem
--service-account-private-key-file=/home/core/ssl/kubelet-key.pem

API server runs with:

KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS="--tls-cert-file=/srv/ssl/api/api.crt --tls-private-key-file=/srv/ssl/api/api.key --client-ca-file=/srv/ssl/api/ca.crt --service-account-key-file=/srv/ssl/api/api.key --secure-port=443 --bind-address=10.135.12.156 --etcd-cafile=/srv/ssl/etcd/ca.crt --apiserver-count=6

I get this error:
(screenshot of the error omitted)

Arguments of the dashboard:

          args:
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            - --apiserver-host=https://xxx:443

Thanks
Manuel

@CPlusPlus17

CPlusPlus17 commented May 18, 2016

I did some more testing:

kube-apiserver[7065]: I0518 10:50:26.104417    7065 logs.go:41] http: TLS handshake error from xx.xx.xx.xx:56083: remote error: bad certificate

Is logged when accessing: https://host/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#/replicationcontrollers

I validated the access tokens:

curl -k -H "Authorization: Bearer $TOKEN_VALUE" https://host:443/api/v1/replicationcontrollers
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/healthz",
    "/healthz/ping",
    "/logs/",
    "/metrics",
    "/resetMetrics",
    "/swagger-ui/",
    "/swaggerapi/",
    "/ui/",
    "/version"
  ]
}

Shouldn't the dashboard use the token for authentication? Why is the bad certificate error thrown? I currently don't understand what's happening.

Greets
Manuel

Update:

df
Filesystem                       1K-blocks    Used Available Use% Mounted on
/dev/mapper/rhel_wbf--v3030-root  37173520 3516400  33657120  10% /
devtmpfs                           8124168       0   8124168   0% /dev
tmpfs                              8134356       0   8134356   0% /dev/shm
tmpfs                              8134356   25200   8109156   1% /run
tmpfs                              8134356       0   8134356   0% /sys/fs/cgroup
/dev/sda1                           508588  171252    337336  34% /boot
tmpfs                              1626872       0   1626872   0% /run/user/0
tmpfs                              8134356      12   8134344   1% /var/lib/kubelet/pods/16d22d4a-1cd5-11e6-a50f-005056966f27/volumes/kubernetes.io~secret/default-token-ycpkv
ll /var/lib/kubelet/pods/16d22d4a-1cd5-11e6-a50f-005056966f27/volumes/kubernetes.io~secret/default-token-ycpkv/
total 12
-r--r--r--. 1 root root 1874 May 18 10:47 ca.crt
-r--r--r--. 1 root root   11 May 18 10:47 namespace
-r--r--r--. 1 root root 1197 May 18 10:47 token

Token is correct and ca.crt too.

@cescoferraro
Author

Is the xxx IP private or public? It should be public. I use kubedns to work this out.

@CPlusPlus17

CPlusPlus17 commented May 18, 2016

This is the public IP address. In fact it's the IP address of the LB in front of the API.
What do you mean by using kubedns to work this out?

Update:

I adjusted the certificate to contain the IP address from the error, but I'm still getting the same error:

kube-apiserver[2219]: I0518 16:10:05.571834    2219 logs.go:41] http: TLS handshake error from xxx.xxx.xxx.xxx:45848: remote error: bad certificate

@cescoferraro
Author

cescoferraro commented May 18, 2016

Sorry, I meant private. kubedns knows that the API server is at kubernetes.local, which is the default place the dashboard will look for the apiserver.
To use the public IP you will need to make sure you add it to your certs beforehand.
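
(For reference, getting the extra IPs into the apiserver cert with plain openssl looks something like this; the file names and the second IP are placeholders:)

# openssl.cnf fragment -- every name/IP clients use to reach the apiserver must be a SAN:
#   [ v3_req ]
#   subjectAltName = @alt_names
#   [ alt_names ]
#   DNS.1 = kubernetes
#   DNS.2 = kubernetes.default
#   IP.1  = 10.134.19.15      # private/master IP
#   IP.2  = 203.0.113.10      # public or LB IP (placeholder)
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf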

@CPlusPlus17

CPlusPlus17 commented May 19, 2016

I played around a bit more:

FROM gcr.io/google_containers/kubernetes-dashboard-amd64:canary
ADD ca.crt /etc/ssl/certs/ca-certificates.crt

After that I now get:

the server has asked for the client to provide credentials (get replicationControllers)

kubectl describe pod kubernetes-dashboard-kmfyu --namespace=kube-system

Volumes:
  default-token-7nsaq:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-7nsaq

Secrets are mounted to /var/run/secrets/kubernetes.io/serviceaccount.
I can access the token without problems.
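
(For reference, the values used in the curl below come straight from that mount:)

SA=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN_VALUE=$(cat $SA/token)    # bearer token of the pod's service account
# $SA/ca.crt is the cluster CA; it can be passed to curl via --cacert instead of using -k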

If I run curl in the container:

curl -k -H "Authorization: Bearer $TOKEN_VALUE" https://host:443
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/healthz",
    "/healthz/ping",
    "/logs/",
    "/metrics",
    "/resetMetrics",
    "/swagger-ui/",
    "/swaggerapi/",
    "/ui/",
    "/version"
  ]
}

So everything should be set up right? But why can the dashboard not connect to the API?

@cescoferraro
Author

You should not need to add the certificates to the image. This should be done by service accounts at pod creation.

@maciaszczykm maciaszczykm modified the milestones: Post 1.1, v1.1 Jun 24, 2016
@kfox1111

kfox1111 commented Sep 2, 2016

I've spent a while trying to secure this service and am stuck...

I've got some containers running in the cluster that can run users' custom code, so they will be able to talk to the backend kube-dashboard IP if I deploy it. kube-dashboard wants to use the service account creds rather than basic auth when that is enabled on the API, letting anyone in.

If I try to override it by specifying the API server with --apiserver-host, it fails to auth. If I do the KUBECONFIG env thing, it doesn't work. I used to be able to override the CA like so in 1.0.x:
command: [/bin/sh, -c, 'mkdir -p /usr/local/share/ca-certificates/kube/; cp -a /var/run/secrets/kubernetes.io/serviceaccount/ca.crt /usr/local/share/ca-certificates/kube/kube-ca.crt; update-ca-certificates; exec /dashboard --apiserver-host=https://...']
but that's not working anymore.

If I give up and add an Apache proxy in front to do the auth (not great, since it still uses the service account, but still better than no auth), then I can get it to proxy, but I can't seem to bind the dashboard only to 127.0.0.1, so it's easy to bypass the auth.

So I'm stuck, not being able to deploy the dashboard. Any ideas?

@bryk
Contributor

bryk commented Sep 5, 2016

[/bin/sh, -c, 'mkdir -p /usr/local/share/ca-certificates/kube/; cp -a /var/run/secrets/kubernetes.io/serviceaccount/ca.crt /usr/local/share/ca-certificates/kube/kube-ca.crt; update-ca-certificates; exec /dashboard --apiserver-host=https://...']

When did this work? I think that this never worked. We always had scratch images. Did you use an image from a different source?

@antoineco

antoineco commented Jun 9, 2017

@unixwitch @andybrucenet this privilege escalation is so bad it would deserve its own Issue in order to catch the attention of the maintainers.
Glad I stumbled upon this thread.

@cheld
Contributor

cheld commented Jun 9, 2017

There you have it: #964

@unixwitch NodePort has been removed. Authentication is typically handled by the apiserver.

However, it is correct that Dashboard uses its service account to access the apiserver, which is kind of a privilege escalation.

@floreks does this PR #1539 solve it completely?

@unixwitch

NodePort has been removed. Authentication is typically handled by apiserver

That reduces the attack surface somewhat, but in the current release version it's still possible for any pod to escalate privileges - or, in an environment like ours where pod IP addresses are routed on our internal corporate network, for anyone who can connect to the pod - right?

Until changes from #964 are in the release version linked from https://github.com/kubernetes/dashboard, I still feel this should be clearly noted on that page. I had no idea this problem existed until one day I wondered how Dashboard did authentication, and found out it didn't (and immediately removed it from our cluster).

@floreks
Member

floreks commented Jun 9, 2017

I have to check how kubectl handles the token argument, because there might be a misunderstanding here of how we handle authentication and authorization.

I can guarantee that if everything is set up correctly, there is no way of escalating privileges.

I'll try to check later or tomorrow whether kubectl's token argument adds the header that we require on every request to Dashboard.

However you access Dashboard, in order to handle authorization we require an Authorization: Bearer <token> header to be present on every request. Thanks to that, we dynamically create an apiserver client and pass this token on every call to the apiserver, so it will respond with an unauthorized error if the user doesn't have access to a resource.

You can actually revoke all permissions for the Dashboard service account except the health check and rely fully on the authorization header. Someone described on our sig-ui Slack channel that this works fine together with an auth proxy in front of Dashboard.
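
(In other words, whatever sits in front of Dashboard only has to forward the user's own token; the host name below is a placeholder:)

# every request that reaches Dashboard should carry the caller's token;
# Dashboard then uses that token, not its own SA, when it talks to the apiserver
curl -H "Authorization: Bearer $USER_TOKEN" https://dashboard.example.com/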

I'm writing from my phone so can't check everything right now but I'll get back to you with more details.


@cheld
Contributor

cheld commented Jun 9, 2017

I think the concern was that every pod in the overlay can reach the dashboard pod and can "press the deploy-an-app button" using curl or something else. In addition, they spread the overlay network to the whole company network...

Well, I guess this is correct. You might want to use an overlay network with namespace isolation.

@floreks
Member

floreks commented Jun 9, 2017

Network isolation is a separate concern, and the cluster operator should take care of it.

Anyway, if the admin revokes all privileges from the Dashboard SA and relies on the auth header, then pod-to-pod communication won't be an issue, because the Dashboard SA won't have privileges to do anything.


@unixwitch

you might want to use an overlay network with namespace isolation

But how would I know that I need to do that, since this behaviour is not documented? I'm not saying anything in Dashboard itself should change, only that this default behaviour should be noted prominently in the documentation, because it's neither obvious nor expected. (I see that this is being fixed properly with delegated authorization, which is great, but that's not related to my point here until it becomes the default.)

if the admin revokes all privileges from the dashboard SA and relies on the auth header then pod-to-pod communication won't be an issue because the dashboard SA won't have privileges to do anything

Great, but the provided manifest doesn't do that; it grants the cluster-admin ClusterRole to the dashboard.
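
(For the record, the binding in the shipped manifest is essentially the following; names are as I remember them and may differ slightly between versions:)

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system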

I am not trying to be a pain here, but as currently implemented, deploying Dashboard in a cluster will completely bypass any cluster authorization configuration (RBAC/ABAC/webhook/etc.), and there is no way for cluster administrators to know that this is the case. I don't really understand why this isn't seen as a problem.

Is there something I'm missing here? If an Authorization header field is required to process any HTTP request, and Dashboard will reject any request without one, then the problem would be fixed, but as I understand it that isn't (currently) the case, is it?

@liggitt
Member

liggitt commented Jun 10, 2017

Anyway, if the admin revokes all privileges from the Dashboard SA and relies on the auth header, then pod-to-pod communication won't be an issue, because the Dashboard SA won't have privileges to do anything.

The default dashboard manifest should not expose the cluster this way, it should only deploy the dashboard.

If the dashboard wants to have cluster-admin power itself, that should be done as a separate step, with clear instructions/warnings that any user accessing the dashboard will be able to act with cluster-admin permissions via the dashboard, and that network isolation steps are necessary to prevent access from other pods via the pod or service IPs.

@floreks
Member

floreks commented Jun 10, 2017

@unixwitch @liggitt Of course I understand your point of view. We will update our documentation and notify users loud and clear that using this method to deploy dashboard will not be secure in some scenarios.

On the other hand, from my perspective we have three groups of people that use Dashboard:

  1. People who have very little experience with Kubernetes and would like to just copy & paste a command, open a browser page and see a working Dashboard.
  2. People with some experience with Kubernetes. They know the security concepts but don't always know how to set everything up.
  3. Advanced users who take the application as-is and can analyze and adjust the configuration to their actual needs.

From what we have seen recently in the issues, there are many more people in the first group. After Kubernetes 1.6 was released and new restrictions were added to the default RBAC rules, we had plenty of similar issues created related to that. We decided then that we should provide as simple a mechanism as possible for deploying and running Dashboard, which by default offers full cluster-admin privileges.

From time to time we also get this kind of issue (and they are justified), where people don't know how to use the recently added authorization mechanism or are using it wrong. That's on us, because our documentation needs to be updated and we have to make some things clear.

Regarding exposing Dashboard with full privileges by default, I don't agree that we should change that. I have already provided a few reasons why, and additionally there are many use cases in which this is the correct behavior. We have already stopped exposing Dashboard via NodePort, which increases security.

We have to remember that people use Kubernetes in many different ways, and your use case, which would require a more restrictive default setup, is not the only one. Of course we could add more Dashboard YAMLs with different security configs, but then how can we know which config is generic enough to work for everyone? Should we revoke all rights? Should we make it read-only? Should we allow access only to the default namespace?

To sum up, I think we should update our documentation but leave the current behavior as the default, with a remark that by default Dashboard has full access to the cluster, and the user has to take care of the security setup if the default is not good for them.

@floreks
Member

floreks commented Jun 10, 2017

Regarding the issues described by @andybrucenet with kubectl proxy: if I remember correctly, this proxy only presents the authorization header to the apiserver, and the auth header is not present when the request reaches Dashboard. That is why Dashboard falls back to using the SA mounted into the pod to authorize itself.

There were some plans to make the apiserver proxy requests to add-ons. Until something like that is implemented, kubectl proxy is not the right way to access Dashboard if you want to authorize users.

kubernetes/kubernetes#29714

@liggitt
Member

liggitt commented Jun 10, 2017

Regarding exposing dashboard with full privileges by default I don't agree that we should change that.

Insecure by default is a bad default. For better or worse, the manifest the dashboard provides is referenced in many places as "the way" to deploy the dashboard.

It would be far better to make the default deployment just deploy the dashboard, and make the dashboard respond to a lack of API access by displaying options/instructions to the person accessing the dashboard:

  • Instructions to access it with an authorization token to use the dashboard with your own API permissions
  • Instructions on how to give the dashboard service account API access (accompanied with the disclaimer about it allowing escalation to anyone who can access the dashboard)

That keeps the user experience intact but requires opt-in by cluster admins before opening up API access via the dashboard.

@floreks
Member

floreks commented Jun 10, 2017

I agree that this would improve overall security, but on the other hand it would reduce the simplicity of deploying and running Dashboard. We would end up again with dozens of issues saying that users can't access Dashboard.

I think we can meet in the middle here. We can split the Dashboard deployment into two parts: application and RBAC, similar to what flannel does. Then we can prepare two RBAC files: minimal and full access. This way the user has to choose up front what they need, instead of us making this choice for them. With the minimal RBAC, the user will have to take care of authorization.
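
(A rough sketch of what the split could look like; the file names and the contents of the minimal variant are assumptions, not a committed design:)

# dashboard-rbac-minimal.yaml -- give the Dashboard SA next to nothing;
# real access then comes only from the user's forwarded Authorization header
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules: []          # or only the few reads the UI itself needs
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

# dashboard-rbac-full.yaml would keep today's cluster-admin ClusterRoleBinding
# (quoted earlier in this thread).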

@liggitt
Member

liggitt commented Jun 10, 2017

Splitting the manifest is definitely a step in the right direction, sure.

The "grant cluster-admin to the dashboard" manifest should not just be called "dashboard-rbac". It should be accompanied by a very clear warning that unless significant care is taken (with network isolation of the pod and service IPs, no exposure via ingress, etc), it defeats the point of having RBAC enabled in the cluster and allows anyone with access to the pod or service IP network cluster-admin permissions.

If dashboard were not part of the kubernetes org, or was not enabled by default in deployments, it would have more freedom to make compromises in this area, but as-is, I expect the dashboard to preserve cluster security by default.

kubernetes/kubernetes#23201 reported this issue in 1.1, but the dashboard was re-enabled by default in 1.2 without addressing this issue. With RBAC enabled in GCE/GKE in 1.6, those deployments should reconsider whether it's appropriate to give the dashboard cluster-admin permissions out of the box.

@liggitt
Member

liggitt commented Jun 10, 2017

@cjcullen ^

@liggitt
Member

liggitt commented Jun 10, 2017

We would end up again with dozens of issues saying that users can't access dashboard.

What about the suggestion that the dashboard present information in-app about how to give it API access? That makes it pretty clear that nothing is broken, and what the admin needs to do if they want the dashboard to have API access itself.

@bryk
Contributor

bryk commented Jun 12, 2017

@floreks @liggitt These are valuable comments, thanks!

  1. Re splitting YAML config files: I see that we're in agreement here. Can we treat this as an action item to have "secure" YAMLs and "insecure" ones?

  2. Re what should be the default: @floreks, I think that the default config should be secure. Security is always the highest priority. We can explore some creative ideas here on what to do. We could do a read-only secure setup by default and use config maps for turning it into the insecure one. WDYT?

@rf232
Contributor

rf232 commented Jun 12, 2017

Even read-only can be a privilege escalation: imagine a user with access only to namespace 'a' who guesses the dashboard's address (or has read access to kube-system); through the dashboard they then also have read access to namespace 'b'. So while I think read-only access is a valid option, it is not strong enough for a truly secure config.

@bryk
Contributor

bryk commented Jun 12, 2017

Yeah, correct. How about the kube-proxy solution that would forward the user's creds? That would be secure by default, wouldn't it?

@rf232
Contributor

rf232 commented Jun 12, 2017

@bryk You mean the proposal in kubernetes/kubernetes#29714 ?
If so, yes, that is the goal of the proposal. But there are still some questions about how exactly this should work, and it might be prudent to harden our stuff here first, and then, as soon as something is done there, make sure it works with our hardened setup.

(e.g. require some form of auth token unless we give a flag/env var to the dashboard called something like "UNSAFE_I_KNOW_WHAT_I_AM_DOING" in the spec, and actually implement one or two reference sidecar containers that are able to log in and proxy to the dashboard while adding this token)

@floreks
Member

floreks commented Jun 13, 2017

I think we should also remember that not everyone uses a Kubernetes cluster as a "playground" for users/devs. There are many use cases where only a trusted set of people have access to the cluster. They are in charge of deploying applications, and users only access those applications without any knowledge of the infrastructure underneath.

Re what should be the default: @floreks, I think that the default config should be secure. Security is always the highest priority. We can explore some creative ideas here on what to do. We could do a read-only secure setup by default and use config maps for turning it into the insecure one. WDYT?

IMHO the current config is still secure for some use cases and insecure for others. It will be hard to target every use case and make everyone happy. Right now we are discussing the case where the cluster is shared among users and namespaces are shared between groups of people, so trusted and untrusted users have direct access to the cluster.

If it were only up to me, we could deploy Dashboard with a minimal set of privileges by default, but I'm concerned about our users here. Messages that are known and understandable to us are not necessarily so for everyone. The number of issues created after the Kubernetes 1.6 release about "not being able to access dashboard" proves that. Even though the error message was simple, many people had no idea how to deal with it. That's why we decided to simplify deployment for beginners.

I think we have to find a solution that works for both groups of people I described in my previous comment.

@cheld
Contributor

cheld commented Jun 14, 2017

The number of issues that get filed against Dashboard but are really about Kubernetes setup is frustrating. It started with kubeadm in 1.5 and got worse with 1.6.

I think we can split the installation guide into a single-tenant and a multi-tenant setup.

@andybrucenet

To add to this: my current "solution" is not to run the dashboard with its default settings at all. Instead, I run a separate container for each of my users. This user-specific dashboard container runs as a service account which I create to "shadow" the actual user. I use RBAC policies to keep User A from accessing the dashboard that runs on behalf of User B.

Net effect: for our small team this works OK for now. At least I can publish a dashboard experience for each user. Of course, this doesn't scale to larger teams.

If anyone is interested in the setup, just reply and I can share the policy/setup files with you. It's pretty straightforward once I thought through the process.

@pawelprazak

pawelprazak commented Aug 21, 2017

@andybrucenet we might be interested in setting up a dashboard per team, so if I could trouble you for an example of how your setup works, that would be very much appreciated.

@marceloavan

marceloavan commented Aug 23, 2017

@andybrucenet I think your use case could be useful to other teams, so it would be great if you shared your setup.

@andybrucenet

Hello @pawelprazak and @marceloavan - I have finally put together a rudimentary article on my approach. See https://www.softwareab.net/wordpress/kubernetes-dashboard-authentication-isolation/ which has a link to the GitHub repo with example files.

@maciaszczykm
Member

Closing as stale.

anvithks pushed a commit to anvithks/k8s-dashboard that referenced this issue Sep 27, 2021
Fixed the issue with modify user. Removed validation for tenant