Tiller module #25
Conversation
.circleci/config.yml (Outdated)
|   GRUNTWORK_INSTALLER_VERSION: v0.0.21
|   TERRATEST_LOG_PARSER_VERSION: v0.13.13
| - KUBERGRUNT_VERSION: v0.3.6
| + KUBERGRUNT_VERSION: v0.3.8-alpha1
REMINDER: don't forget to set this to a real release.
| **Note**: You need to be an administrator to run this command. Specifically, this should be done with the same `kubectl`
| context as the one used to deploy Tiller. You can use the `--kube-context` option to use a different context from the
| default.
This is now irrelevant, because upgrades can be done by modifying the vars passed to the `k8s-tiller` module call.
Just curious: this is done by upgrading `tiller_image_version`?
Yup, that is correct. Updating `tiller_image_version` will trigger a new rollout of the Deployment resource and the Pods will be restarted.
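For illustration, the bump is a one-line change in the module call. A minimal sketch (the module source path and version value are made up; `namespace` and `tiller_image_version` match the hunks in this PR):

```hcl
module "tiller" {
  source = "./modules/k8s-tiller"

  namespace = "${module.tiller_namespace.name}"

  # Changing this value rolls out a new revision of the Tiller Deployment,
  # restarting the Pods on the new image.
  tiller_image_version = "v2.12.2"
}
```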
|   # Then use that CA to generate server TLS certs
|   kubergrunt tls gen --namespace ${module.tiller_namespace.name} --ca-secret-name ${local.tls_ca_secret_name} --ca-namespace kube-system --secret-name ${local.tls_secret_name} --secret-label gruntwork.io/tiller-namespace=${var.tiller_namespace} --secret-label gruntwork.io/tiller-credentials=true --secret-label gruntwork.io/tiller-credentials-type=server --tls-subject-json '${jsonencode(var.tls_subject)}' --tls-private-key-algorithm ${var.private_key_algorithm} ${local.tls_algorithm_config} ${local.kubectl_config_options}
|   EOF
| }
These long string kubergrunt calls are sad, but I am opening a discussion to open source package-terraform-utilities so that we can do the esc_newl trickery in the EKS module: https://github.com/gruntwork-io/terraform-aws-eks/blob/master/examples/eks-cluster-with-supporting-services/core-services/main.tf#L126
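In the meantime, one way to keep these commands reviewable without any helper module is to assemble them from a list, one flag per element. A sketch, assuming Terraform 0.11 syntax and reusing a subset of the flags from the hunk above:

```hcl
locals {
  # Each flag lives on its own line, so diffs touch one flag at a time.
  tls_gen_args = [
    "kubergrunt tls gen",
    "--namespace ${module.tiller_namespace.name}",
    "--ca-secret-name ${local.tls_ca_secret_name}",
    "--ca-namespace kube-system",
    "--secret-name ${local.tls_secret_name}",
    "--tls-subject-json '${jsonencode(var.tls_subject)}'",
  ]

  # Collapse back down to the single command string the provisioner needs.
  tls_gen_command = "${join(" ", local.tls_gen_args)}"
}
```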
rileykarson left a comment
Mostly just comment suggestions / questions
main.tf (Outdated)
|   namespace            = "${module.tiller_namespace.name}"
|   tiller_image_version = "${var.tiller_version}"
|
|   # Kubergrunt will store the private key under tls.pem
Suggested change:
- # Kubergrunt will store the private key under tls.pem
+ # kubergrunt stores the private key in the current directory as tls.pem
This reads a bit like a magic var to me right now; more imperative text and explicitly stating it's the current dir helps my understanding. (Unless it's wrong, of course.)
This is actually the name of the key in the Kubernetes Secret, which is mounted onto the container in a directory (hence the use of the word filename). I updated to:
# Kubergrunt will store the private key under the key "tls.pem" in the corresponding Secret resource, which will be
# accessed as a file when mounted into the container.
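Roughly how that surfaces on the consumer side, as a sketch (all names and the image tag here are illustrative, not the module's actual resources):

```hcl
resource "kubernetes_deployment" "tiller" {
  metadata {
    name      = "tiller-deploy"
    namespace = "${var.tiller_namespace}"
  }

  spec {
    selector {
      match_labels {
        app = "helm"
      }
    }

    template {
      metadata {
        labels {
          app = "helm"
        }
      }

      spec {
        container {
          name  = "tiller"
          image = "gcr.io/kubernetes-helm/tiller:v2.12.2"

          # Each key in the Secret becomes a filename under mount_path, so the
          # private key is read as /etc/certs/tls.pem inside the container.
          volume_mount {
            name       = "tiller-certs"
            mount_path = "/etc/certs"
            read_only  = true
          }
        }

        volume {
          name = "tiller-certs"

          secret {
            secret_name = "${local.tls_secret_name}"
          }
        }
      }
    }
  }
}
```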
Thanks for clarifying!
main.tf (Outdated)
|   dependencies = ["${null_resource.tiller_tls_certs.id}"]
| }
|
| # We use kubergrunt to wait for Tiller to be deployed. Any resources that depend on this can assume Tiller is
Should we eliminate the first bit?

> We use kubergrunt to wait for Tiller to be deployed.

It feels like we're repeating ourselves, since it's written right below. I'm a lot more interested in why, which you cover in the second sentence:

> Any resources that depend on this can assume Tiller is successfully deployed and up at that point.
Makes sense. Updated to:

# The Deployment resources created in the module call to `k8s-tiller` will complete creation before the rollout is
# complete. We use kubergrunt here to wait for the deployment to complete, so that when this resource is done creating,
# any resources that depend on this can assume Tiller is successfully deployed and up at that point.
To be more explicit about the why.
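As a sketch, the wait gate then reduces to a `null_resource` like the following (the kubergrunt subcommand and flags are an assumption based on this discussion, not copied from the module):

```hcl
resource "null_resource" "wait_for_tiller" {
  provisioner "local-exec" {
    # Block until the Tiller Deployment's rollout finishes, so anything that
    # depends on this resource can assume Tiller is up and serving.
    command = "kubergrunt helm wait-for-tiller --tiller-namespace ${var.tiller_namespace} ${local.kubectl_config_options}"
  }
}
```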
sgtm
modules/k8s-tiller/README.md (Outdated)
| * See the [root README](/README.md) for instructions on using Terraform modules.
| * This module uses [the `kubernetes` provider](https://www.terraform.io/docs/providers/kubernetes/index.html).
| * See the [examples](/examples) folder for example usage.
Suggested change:
- * See the [examples](/examples) folder for example usage.
Updating this to point to the root example:
* See [the example at the root of the repo](/) for example usage.
| # resources backing the values in the dependencies list.
| # ---------------------------------------------------------------------------------------------------------------------
|
| resource "null_resource" "dependency_getter" {
Is this a variant of https://github.com/gruntwork-io/terraform-google-sql/blob/master/modules/cloud-sql/main.tf#L113-L120, or am I missing something? If that's the case (that they're duplicated), we should use a consistent implementation and naming scheme across modules.
This is the same thing, but I've been using this implementation in this repo for a while...
The other implementation is probably more correct though, so I will go through and update the usage.
Actually, I like using a list input as opposed to a string, as it makes the resulting input more readable when you have multiple dependencies. Any objection to standardizing on a list input for this? @autero1
(to be clear, I don't mind which we choose, I just prefer that they're the same and we don't need to block on this)
Standardizing to a list input sounds good!
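For reference, the list-input flavor of the pattern is only a few lines. A sketch following the linked terraform-google-sql implementation (names are theirs and may shift during the update):

```hcl
variable "dependencies" {
  description = "A list of values (e.g resource IDs) this module should wait on before creating its own resources."
  type        = "list"
  default     = []
}

# Terraform 0.11 has no module-level depends_on, so external dependencies are
# funneled through this null_resource; resources in the module depend on it
# to inherit the ordering.
resource "null_resource" "dependency_getter" {
  triggers = {
    instance = "${join(",", var.dependencies)}"
  }
}
```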
Co-Authored-By: yorinasub17 <430092+yorinasub17@users.noreply.github.com>
OK reverted back to using
Ok build passed. Merging and releasing!
Depends on gruntwork-io/kubergrunt#44
This implements the idea mentioned in Slack. Specifically, instead of using `kubergrunt` for the entire helm deployment process, we use it for specific functionality to support the deployment, and then rely on the `kubernetes` provider to manage the underlying `Deployment` resource. This has a number of benefits:

* Updates and changes to Tiller can be managed through Terraform instead of `kubergrunt`.
* You can pick and choose which `kubergrunt` functionalities to use. For example, one can replace TLS management with a different process (e.g `cert-manager`), but still use `kubergrunt` for waiting for helm deployment.
* `destroy` can now be done without setting up the `helm` client.

Note that as a consequence of this, the examples are more complicated because there are multiple `kubergrunt` calls for each phase as opposed to just one. This is a tradeoff of making the unhappy path easier to manage, at the expense of complexity for the happy path. That said, I think in the long run this is more production ready because it handles updates and changes much better.
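To make the division of labor concrete, the resulting pattern looks roughly like this. A sketch only: the module path, names, and values are illustrative, and the `kubergrunt tls gen` flags are a subset of those in the hunks above:

```hcl
# The kubernetes provider (via the k8s-tiller module) owns the Tiller
# Deployment itself, so updates and changes flow through normal Terraform plans.
module "tiller" {
  source = "./modules/k8s-tiller"

  namespace            = "kube-system"
  tiller_image_version = "v2.12.2"
}

# kubergrunt handles only the supporting steps, each of which can be swapped
# out independently (e.g replace this TLS step with cert-manager and keep the rest).
resource "null_resource" "tiller_tls_certs" {
  provisioner "local-exec" {
    command = "kubergrunt tls gen --namespace kube-system --secret-name tiller-certs --tls-subject-json '${jsonencode(var.tls_subject)}'"
  }
}
```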