Authentication guidelines #574
Comments
@luxas Can you chime in?
@cescoferraro When you run the Dashboard UI inside your cluster, the UI should attempt to securely connect to the master using serviceaccounts. This is our default behaviour.
Re securing the Dashboard UI itself: at the moment you should secure it by accessing it via service proxy or through a firewalled private network. We're working on IAM/auth support in the meantime.
@bryk FYI, this PR will make local Docker users happier: kubernetes/kubernetes#23550 In this case @cescoferraro, I think you should set
as @antoineco pointed out.
I got it all working now. I was having problems with the certificates: I was signing them with the cloudflare cfssl tool; now I create everything with easy_rsa and leave the cfssl toolkit to generate only the etcd certificates. It has something to do with the fact that kubelet expects unencrypted certificate files. Anyway, the point I want to raise here is that you MUST have the --basic-auth-file flag on the kube-apiserver call to authenticate the dashboard without a service proxy (I am using romulus btw, and it is great!). Token authentication is not enough because the browser does not know how to ask for a token, so it lets everybody in, but it does know how to ask for user/admin. And it's not just the dashboard: Kubedash behaves the same way, and the documentation also lacks this info. This leaves newbie users like me lost, especially on bare-metal installations. The default serviceaccounts behaviour is not explicitly documented, so valuable info like what @luxas just showed us stays hidden in issue comments. I can submit a PR with a 101 ABC guide for kube-system apps on bare metal if that would be useful for the repo.
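For anyone following along, a minimal sketch of the flag being described (the file path and credentials below are illustrative, not from this thread; the CSV format is password,user,uid):

```sh
# Illustrative only: enable HTTP basic auth on the apiserver so the browser
# can prompt for credentials when proxying to the dashboard.
# /srv/kubernetes/basic_auth.csv contains lines of the form: password,user,uid
kube-apiserver \
  --basic-auth-file=/srv/kubernetes/basic_auth.csv \
  ...   # the rest of your existing apiserver flags stay unchanged
```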
@cescoferraro Yes, please start writing a doc about this and I'll review and maybe fill in some details
@cescoferraro We're aware of the issues on the Dashboard UI side. This will be fixed. No concrete ETA though.
It would be good if the setting could be made via a kubeconfig file. My cluster is configured to use client certificates for communication with the apiserver, and the dashboard does not support client certificates.
this is related
It actually should support kubeconfig files. It works the same way as kubectl, i.e. you should mount the config file in a specific directory and that's all. Can you verify this?
Where do I put the kubeconfig file?
All right, I've done some investigation on this. Currently kubeconfig files do not work, but I have a fix for this problem that will go into the next release. After the release you'll be able to use your kubeconfig file from kubectl to use the UI. In a cluster you can mount it to
More about kubeconfig: http://kubernetes.io/docs/user-guide/sharing-clusters/
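To make the kubeconfig comments above concrete, here is a rough sketch of what mounting a kubeconfig into the dashboard pod could look like; the secret name and mount path are assumptions for illustration, since the exact directory the fix expects is not spelled out in this thread:

```yaml
# Sketch only: mount an existing kubeconfig (stored in a Secret) into the
# dashboard container. Names and the mount path are assumed, not official.
spec:
  containers:
  - name: kubernetes-dashboard
    image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
    volumeMounts:
    - name: kubeconfig
      mountPath: /etc/kubernetes   # assumed location
      readOnly: true
  volumes:
  - name: kubeconfig
    secret:
      secretName: dashboard-kubeconfig   # created e.g. from ~/.kube/config
```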
@luxas
API server runs with:
Arguments of the dashboard:
Thanks
I did some more testing:
This is logged when accessing https://host/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#/replicationcontrollers. I validated the access tokens:
Shouldn't the dashboard use the token for authentication? Why is the bad-certificate error thrown? I currently don't understand what's happening. Greets. Update:
The token is correct and ca.crt too.
Is the xxx IP private or public? It should be public. I use kubedns to work this out.
This is the public IP address. In fact it's the IP address of the LB in front of the API. Update: I adjusted the certificate to contain the IP address from the error, but I'm still getting the same error:
Sorry, I meant private. So kubedns knows that the API server is at kubernetes.local, which is the default place the dashboard will look for the apiserver.
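As a side note, a quick hedged way to check what the cluster DNS actually resolves for the API server from inside a pod (the service name is usually kubernetes.default; the full cluster domain below is an assumption):

```sh
# Run inside any pod whose DNS is served by kube-dns:
nslookup kubernetes.default
# Or query the fully qualified service name (cluster domain assumed):
nslookup kubernetes.default.svc.cluster.local
```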
I played around with it a bit more:
After that I now get:
kubectl describe pod kubernetes-dashboard-kmfyu --namespace=kube-system
Secrets are mounted to /var/run/secrets/kubernetes.io/serviceaccount. If I run curl in the container:
So everything should be set up right? But why can the dashboard not connect to the API?
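For context, the standard way to exercise the API from inside a pod with the mounted service-account credentials (a sketch; kubernetes.default is the usual in-cluster service name, and the endpoint queried here is just an example) looks roughly like this:

```sh
# Use the service-account token and CA bundle mounted into every pod.
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat "$SA_DIR/token")
curl --cacert "$SA_DIR/ca.crt" \
     -H "Authorization: Bearer $TOKEN" \
     https://kubernetes.default/api/v1/namespaces/kube-system/pods
```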
You should not need to add the certificates to the image. This should be done by service accounts at pod creation.
I've spent a while trying to secure this service and am stuck... I've got some containers running in the cluster that can run users' custom code, so they will be able to talk to the backend kube-dashboard IP if I deploy it. kube-dashboard wants to use the service account creds rather than basic auth when it is enabled on the API, letting anyone in. If I try to override this by specifying the API server with --apiserver-host, it fails to auth. If I do the env KUBECONFIG thing, it doesn't work. I used to be able to override the CA like so in 1.0.x. If I give up and add an Apache proxy in front to do the auth (not great, as it still uses the service account, but still better than no auth), then I can get it to proxy, but I can't seem to bind the dashboard only to 127.0.0.1, so it's easy to bypass the auth. So I'm stuck, not being able to deploy the dashboard. Any ideas?
When did this work? I think that this never worked. We always had scratch images. Did you use an image from a different source?
@unixwitch @andybrucenet this privilege escalation is so bad it would deserve its own issue in order to catch the attention of the maintainers.
There you have it: #964 @unixwitch NodePort has been removed. Authentication is typically handled by the apiserver. However, it is correct that Dashboard is using a service account to access the apiserver, which is a kind of privilege escalation.
That reduces the attack surface somewhat, but in the current release version it's still possible for any pod to escalate privileges - or, in an environment like ours where pod IP addresses are routed on our internal corporate network, for anyone who can connect to the pod - right? Until the changes from #964 are in the release version linked from https://github.com/kubernetes/dashboard, I still feel this should be clearly noted on that page. I had no idea this problem existed until one day I wondered how Dashboard did authentication, and found out it didn't (and immediately removed it from our cluster).
I have to check how kubectl handles the token argument, because there might be a misunderstanding here of how we handle authentication and authorization. I can guarantee that if everything is set up correctly then there is no way of escalating privileges. I'll try to check later or tomorrow whether the kubectl token argument adds the header that we require on every request to the dashboard. However you access the dashboard, in order to handle authorization we require
You can actually revoke all permissions for the dashboard service account except the health check and fully rely on the authorization header. Someone described on our sig-ui Slack channel that this works fine together with an auth proxy in front of the dashboard. I'm writing from my phone so can't check everything right now, but I'll get back to you with more details.
I think the concern was that every pod in the overlay can reach the dashboard pod and can "press the deploy an app button" using curl or something else. In addition, they spread the overlay network across the whole company network... Well, I guess this is correct. You might want to use an overlay network with namespace isolation.
Network isolation is a separate concern and the cluster operator should take care of that. Anyway, if the admin revokes all privileges from the dashboard SA and relies on the auth header, then pod-to-pod communication won't be an issue, because the dashboard SA won't have privileges to do anything.
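As a hedged aside, one way to confirm what the dashboard service account can still do after its bindings are removed is impersonation with kubectl (the verbs and resources below are just examples):

```sh
# Ask the apiserver whether the dashboard SA is allowed to perform an action,
# impersonating that service account (requires impersonation privileges).
kubectl auth can-i list pods --all-namespaces \
  --as=system:serviceaccount:kube-system:kubernetes-dashboard
kubectl auth can-i create deployments -n default \
  --as=system:serviceaccount:kube-system:kubernetes-dashboard
```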
But how would I know that I need to do that, since this behaviour is not documented? I'm not saying anything in Dashboard itself should change, only that this default behaviour should be noted prominently in the documentation, because it's neither obvious nor expected. (I see that this is being fixed properly with delegated authorization, which is great, but that's not related to my point here until it becomes the default.)
Great, but the provided manifest doesn't do that, it grants the
I am not trying to be a pain here, but as currently implemented, deploying Dashboard in a cluster will completely bypass any cluster authorization configuration (RBAC/ABAC/webhook/etc.), and there is no way for cluster administrators to know that this is the case. I don't really understand why this isn't seen as a problem. Is there something I'm missing here? If an
The default dashboard manifest should not expose the cluster this way, it should only deploy the dashboard. If the dashboard wants to have cluster-admin power itself, that should be done as a separate step, with clear instructions/warnings that any user accessing the dashboard will be able to act with cluster-admin permissions via the dashboard, and that network isolation steps are necessary to prevent access from other pods via the pod or service IPs.
@unixwitch @liggitt Of course I understand your point of view. We will update our documentation and notify users loud and clear that using this method to deploy the dashboard will not be secure in some scenarios. On the other hand, from my perspective we have 3 groups of people that use the dashboard.
From what we have seen recently looking at issues, there are many more people in the first group. After Kubernetes 1.6 was released and new restrictions were added to the default RBAC rules, we had plenty of similar issues created related to that. We decided then that we should provide as simple a mechanism as possible for deploying and running the dashboard, which by default offers full cluster-admin privileges. From time to time we also get this kind of issue (and they are justified) where people don't know how to use the recently added authorization mechanism or are doing it wrong. That's on us, because our documentation needs to be updated and we have to make some things clear.

Regarding exposing the dashboard with full privileges by default, I don't agree that we should change that. I have already given a few reasons why, and additionally there are many use cases in which this is the correct behavior. We have already removed exposure of the dashboard via NodePort, which increases security. We have to remember that people use Kubernetes in many different ways, and your use case, which would require a more restrictive default setup, is not the only one. Of course we could add more dashboard YAMLs with different security configs, but then how can we know which config is generic enough to work for everyone? Should we revoke all rights? Should we make it read-only? Should we allow access only to the default namespace?

To sum up, I think we should update our documentation but leave the current behavior as the default, with a remark that by default the dashboard has full access to the cluster and the user has to take care of the security setup if the default is not good for them.
Regarding the issues described by @andybrucenet with
There were some plans to make the apiserver proxy requests to addons. Until something like this is implemented
Insecure by default is a bad default. For better or worse, the manifest the dashboard provides is referenced in many places as "the way" to deploy the dashboard. It would be far better to make the default deployment just deploy the dashboard, and make the dashboard respond to a lack of API access by displaying options/instructions to the person accessing the dashboard:
That keeps the user experience intact but requires opt-in by cluster admins before opening up API access via the dashboard.
I agree that this would improve overall security, but on the other hand it would reduce the simplicity of deploying and running the dashboard. We would end up again with dozens of issues saying that users can't access the dashboard. I think we can meet in the middle here. We can split the dashboard deployment into two parts: application and RBAC, similar to what flannel does. Then we can prepare two RBAC files: minimal and full access. This way the user has to choose what they need up front, instead of having the choice made for them. With minimal RBAC, the user will have to take care of authorization.
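As a rough illustration of the proposed split (file names and the exact role choices here are assumptions, not the eventual official manifests):

```yaml
# dashboard-rbac-minimal.yaml (sketch): only the service account, no bindings.
# Authorization is expected to come with each request (e.g. an auth header).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
---
# dashboard-rbac-full.yaml (sketch): grants the dashboard SA cluster-admin.
# WARNING: anyone who can reach the dashboard then acts with cluster-admin rights.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
```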
Splitting the manifest is definitely a step in the right direction, sure. The "grant cluster-admin to the dashboard" manifest should not just be called "dashboard-rbac". It should be accompanied by a very clear warning that unless significant care is taken (with network isolation of the pod and service IPs, no exposure via ingress, etc.), it defeats the point of having RBAC enabled in the cluster and gives anyone with access to the pod or service IP network cluster-admin permissions. If dashboard were not part of the kubernetes org, or were not enabled by default in deployments, it would have more freedom to make compromises in this area, but as-is, I expect the dashboard to preserve cluster security by default. kubernetes/kubernetes#23201 reported this issue in 1.1, but the dashboard was re-enabled by default in 1.2 without addressing it. With RBAC enabled in GCE/GKE in 1.6, those deployments should reconsider whether it's appropriate to give the dashboard cluster-admin permissions out of the box.
What about the suggestion that the dashboard present information in-app about how to give it API access? That makes it pretty clear that nothing is broken, and what the admin needs to do if they want the dashboard to have API access itself.
@floreks @liggitt These are valuable comments, thanks!
Even read-only can be a privilege escalation. Imagine a user with access only to namespace 'a' who gets access to the dashboard (or has read access to kube-system): through the dashboard they also gain read access to namespace 'b'. So while I think read-only access is a valid option, it is not strong enough for a truly secure config.
Yeah, correct. How about the kube-proxy solution that would forward the user's creds? That would be secure by default, wouldn't it?
@bryk You mean the proposal in kubernetes/kubernetes#29714? (e.g. require some form of auth token unless we give a flag/env var to the dashboard called something like "UNSAFE_I_KNOW_WHAT_I_AM_DOING" in the spec, and actually implement one or two reference sidecar containers that are able to log in and proxy to the dashboard while adding this token)
I think we should also remember that not everyone uses a Kubernetes cluster as a "playground" for users/devs. There are many use cases where only a trusted set of people has access to the cluster. They are in charge of deploying some application, and users only access that application without any knowledge of the infrastructure underneath.
IMHO the current config is still secure for some use cases and insecure for others. It will be hard to target every use case and make everyone happy. Right now we are discussing the case where the cluster is shared among users and namespaces are shared between groups of people, so trusted and untrusted users have direct access to the cluster. If it were only up to me, we could deploy the dashboard with a minimal set of privileges by default, but I'm concerned about our users here. Messages that are known and understandable to us are not necessarily so for everyone. The number of issues created after the Kubernetes 1.6 release about "not being able to access dashboard" proves that. Even though the error message was simple, many people had no idea how to deal with it. That's why we decided to simplify deployment for beginners. I think we have to find a solution that will work for both groups of people I described in my previous comment.
The number of issues that get dropped on Dashboard but are really related to Kubernetes setup is really frustrating. It started with 1.5 and kubeadm and got worse with 1.6. I think we can split the installation guide into a single-tenant and a multi-tenant setup.
To add to this - my current "solution" is not to run the dashboard natively (i.e. with default settings) at all. Instead, I run a separate container for each of my users. This user-specific dashboard container runs as a service account which I create to "shadow" the actual user. I use RBAC policies to keep User A from accessing the dashboard that runs on behalf of User B. Net effect - for our small team this works OK for now. At least I can publish a dashboard experience for each user. Of course, this doesn't scale to larger teams. If anyone is interested in the setup, just reply and I can share policy/setup files with you. Pretty straightforward once I thought through the process.
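As a hedged sketch of the kind of per-user setup described (all names are invented for illustration; the real files are in the article linked later in the thread):

```yaml
# Sketch: a "shadow" service account for user-a's personal dashboard instance,
# restricted to user-a's namespace via a RoleBinding to the built-in edit role.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-user-a
  namespace: team-a
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: dashboard-user-a-edit
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit          # built-in role, scoped to team-a by this RoleBinding
subjects:
- kind: ServiceAccount
  name: dashboard-user-a
  namespace: team-a
```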
@andybrucenet we might be interested in setting up a dashboard per team, so if I could trouble you for an example of how your setup works, that would be very much appreciated
@andybrucenet I think your use case could be useful to other teams, so it would be great if you shared your setup
Hello @pawelprazak and @marceloavan - I have finally put together a rudimentary article on my approach. See https://www.softwareab.net/wordpress/kubernetes-dashboard-authentication-isolation/ which has a link to the GitHub repo with example files.
Closing as stale.
Issue details
Environment
Steps to reproduce
I am deploying this kubernetes-dashboard-canary
Observed result
using the insecure port
- --apiserver-host=http://{{ .Env.MASTER_PRIVATE }}:8080
it works, but there is no restriction on who can access the dashboard.
using the secure port
- --apiserver-host=https://{{ .Env.MASTER_PRIVATE }}:6443
I get
Get https://10.134.19.15:6443/api/v1/replicationcontrollers: x509: failed to load system roots and no roots provided
as expected, because the pod IP is not on the certificates I generated, as the kube-apiserver confirms:
Mar 24 22:59:32 master kube-apiserver[1298]: I0324 22:59:32.632002 1298 logs.go:41] http: TLS handshake error from 10.100.68.2:45592: remote error: bad certificate
I am using simple authentication to the API, and I was expecting something like this for the dashboard as well. What is the dashboard's way of doing this?
Do I have to build a new Docker image copying the certificates to /etc/kubernetes/ssl/?
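For the certificate error above, one hedged way to check which names and IPs the apiserver certificate actually covers (the certificate path is illustrative):

```sh
# Inspect the Subject Alternative Names of the serving certificate; the
# connection from the dashboard pod only succeeds if the address it dials
# (and a CA it trusts) match what is listed here.
openssl x509 -in /etc/kubernetes/ssl/apiserver.crt -noout -text \
  | grep -A1 "Subject Alternative Name"
```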