Description
I'm having a problem with a working AWS EKS cluster context in click:
[Warning] Couldn't find/load context <<cluster-name>>, now no current context. Error: Failed to get config: Invalid context <<cluster-name>>. Each user must have either a token, a username AND password, or a client-certificate AND a client-key, or an auth-provider
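The error text suggests the config validation only accepts a token, a username/password pair, a client-certificate/client-key pair, or an auth-provider, and does not consider the `exec` stanza at all. A minimal Python sketch of that kind of check (purely a hypothetical illustration to show why an exec-based user entry would be rejected, not click's actual code):

```python
# Hypothetical reconstruction of the validation implied by the error
# message; function and variable names are made up for illustration.
def user_has_usable_auth(user: dict) -> bool:
    """Return True if a kubeconfig user entry passes the checks named
    in the error message. Note that an 'exec' credential plugin is not
    considered, which would explain why an aws-iam-authenticator based
    config is rejected even though it authenticates fine with kubectl."""
    return (
        "token" in user
        or ("username" in user and "password" in user)
        or ("client-certificate" in user and "client-key" in user)
        or "auth-provider" in user
    )

# An EKS-style user entry that relies on an exec credential plugin:
eks_user = {
    "exec": {
        "apiVersion": "client.authentication.k8s.io/v1alpha1",
        "command": "aws-iam-authenticator",
        "args": ["token", "-i", "my-cluster"],
    }
}
print(user_has_usable_auth(eks_user))  # False -> context gets dropped
```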
This is the config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: <<k8s control plane endpoint>>
  name: <<cluster-aws-arn (AWS global ID)>>
contexts:
- context:
    cluster: <<cluster-aws-arn (AWS global ID)>>
    namespace: <<namespace>>
    user: <<cluster-aws-arn (AWS global ID)>>
  name: bci-dev-main
current-context: <<cluster-name>>
kind: Config
preferences: {}
users:
- name: <<cluster-aws-arn (AWS global ID)>>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - <<cluster-name>>
      command: aws-iam-authenticator
      env: null
The only things omitted are essentially identifiers; there are no credentials, tokens, or client certs in the config. Instead, this configuration uses aws-iam-authenticator, which derives the authentication token from AWS-specific environment variables. Access to the cluster is protected by multi-factor auth, so to actually run commands with kubectl (or click) I use aws-vault, which handles authentication, enforces the multi-factor prompt, and then executes the command with the correct env vars set (to be picked up by aws-iam-authenticator). The commands for kubectl/click therefore look like this:
aws-vault exec <<aws-vault config profile>> -- kubectl ...
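For context, this is roughly how a client consumes an exec credential plugin: it runs the configured command and parses an ExecCredential JSON object from the command's stdout, then sends the embedded token as a bearer token to the API server. The sample below uses a fabricated placeholder token in the v1alpha1 shape, not real aws-iam-authenticator output:

```python
# Sketch of parsing an exec credential plugin's output. The JSON shape
# follows the client.authentication.k8s.io ExecCredential object; the
# token value is a fake placeholder for illustration.
import json

sample_output = """
{
  "kind": "ExecCredential",
  "apiVersion": "client.authentication.k8s.io/v1alpha1",
  "status": {"token": "k8s-aws-v1.EXAMPLE"}
}
"""

cred = json.loads(sample_output)
token = cred["status"]["token"]
# The client would then send: Authorization: Bearer <token>
print(token)  # k8s-aws-v1.EXAMPLE
```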
This particular config was generated in the AWS recommended way with aws cli:
aws eks update-kubeconfig
Since this setup follows the official AWS EKS guides, I expect other users to run into this problem. I assume it also affects other AWS clusters set up in a similar way, e.g. with kops.