I'm using a multi-account strategy in AWS and creating AWS resources with an assumed role. I would also like the Helm provider to assume this role using the exec plugin, but for some reason it doesn't work.
Hi @andrey-odeeo, could you please share how the AWS credentials are being supplied?
The credentials of the main account, from which I assume the role that Helm should use, are supplied by exporting the AWS_* variables in the terminal. So basically it looks like the following:
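(Sketched here with placeholder values rather than the actual credentials:)

```sh
# Credentials of the main account, exported in the shell that runs Terraform
# (add AWS_SESSION_TOKEN as well if the credentials are temporary)
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export AWS_DEFAULT_REGION=us-east-1
```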
I had the same issue; it turned out the aws-auth ConfigMap was not updated with the role I was trying to use.
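One way to check this, assuming you have kubectl access to the cluster with a working identity (the role ARN below is a placeholder):

```sh
# The aws-auth ConfigMap in kube-system maps IAM roles to Kubernetes users/groups.
# If the assumed role is not mapped here, the API server rejects its token.
kubectl -n kube-system get configmap aws-auth -o yaml
# Look for an entry like:
#   mapRoles: |
#     - rolearn: arn:aws:iam::111111111111:role/terraform
#       username: terraform
#       groups:
#         - system:masters
```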
Note that the aws eks get-token command will always return a token, even if the cluster doesn't exist.
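A quick way to see this behavior (the role ARN is a placeholder):

```sh
# get-token does not call the EKS API; it only pre-signs an STS request with the
# cluster name embedded, so it prints a token even for a cluster that does not
# exist (as long as the credentials themselves are valid).
aws eks get-token --cluster-name this-cluster-does-not-exist \
  --role-arn arn:aws:iam::111111111111:role/terraform
# Getting a token back is therefore not proof that the role can access the real cluster.
```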
Terraform, Provider, Kubernetes and Helm Versions
Affected Resource(s)
Terraform Configuration Files
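A hedged sketch of the pattern being described, not the reporter's actual files: the Helm provider authenticating through the exec plugin with aws eks get-token and --role-arn. Cluster name, role ARN, and api_version are placeholders/assumptions.

```hcl
# Illustrative only: Helm provider (2.x syntax) authenticating to EKS via the
# exec plugin, asking aws eks get-token to assume a role in the target account.
data "aws_eks_cluster" "this" {
  name = "my-cluster" # placeholder
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.this.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args = [
        "eks", "get-token",
        "--cluster-name", "my-cluster",                           # placeholder
        "--role-arn", "arn:aws:iam::111111111111:role/terraform", # placeholder
      ]
    }
  }
}
```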
Debug Output
Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials
Expected Behavior
Authenticate to EKS using the assumed role.
Actual Behavior
Can't authenticate
Important Factoids
If I take the command and run it in the same terminal where I run terraform plan, I receive the token.
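That manual check looks roughly like this (cluster name and role ARN are placeholders):

```sh
# Run the provider's exec command by hand in the same shell session:
aws eks get-token --cluster-name my-cluster \
  --role-arn arn:aws:iam::111111111111:role/terraform
# Prints an ExecCredential JSON containing a "token" field.
```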
If I create a profile in ~/.aws/credentials and use --profile instead of --role-arn, it works. For example:
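(A hedged illustration of that workaround; the profile name, cluster name, and role ARN are placeholders:)

```hcl
# Illustrative only: point the exec plugin at a named profile instead of
# passing --role-arn directly. The "target-account" profile is defined in
# ~/.aws/credentials or ~/.aws/config and either holds credentials for the
# target account or assumes the role via role_arn/source_profile.
exec {
  api_version = "client.authentication.k8s.io/v1beta1"
  command     = "aws"
  args        = ["eks", "get-token", "--cluster-name", "my-cluster", "--profile", "target-account"]
}
```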
I also tried to pass the environment variables directly using the env block inside exec, but that didn't help either.
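For reference, a sketch of what that env block variant looks like (all values are placeholders, not the actual credentials):

```hcl
# Illustrative only: forwarding credentials to the exec plugin via its env map.
exec {
  api_version = "client.authentication.k8s.io/v1beta1"
  command     = "aws"
  args        = ["eks", "get-token", "--cluster-name", "my-cluster", "--role-arn", "arn:aws:iam::111111111111:role/terraform"]
  env = {
    AWS_ACCESS_KEY_ID     = "AKIAXXXXXXXXXXXXXXXX"                         # placeholder
    AWS_SECRET_ACCESS_KEY = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"     # placeholder
    AWS_DEFAULT_REGION    = "us-east-1"                                    # placeholder
  }
}
```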