This repository was archived by the owner on Dec 16, 2020. It is now read-only.

Conversation

@yorinasub17 (Contributor) commented May 1, 2019

Depends on gruntwork-io/kubergrunt#44

This implements the idea mentioned in Slack. Specifically, instead of using kubergrunt for the entire helm deployment process, we use it only for specific functionality that supports the deployment, and rely on the kubernetes provider to manage the underlying Deployment resource. This has a number of benefits:

  • This is much more robust to partial failures, as there are fewer things to roll back if a failure occurs in kubergrunt.
  • This is more flexible for the consumer, as it is now easier to pick and choose specific kubergrunt functionality. For example, one can replace TLS management with a different process (e.g. cert-manager), but still use kubergrunt to wait for the helm deployment.
  • This is a step towards providing pure Terraform versions of the TLS management, for those who are not sensitive about storing TLS secrets in Terraform state.
  • destroy can now be done without setting up the helm client.

Note that as a consequence, the examples are more complicated: there are multiple kubergrunt calls for each phase as opposed to just one. This is a tradeoff: the unhappy path is easier to manage, at the expense of complexity on the happy path.

That said, I think in the long run this is more production ready because it handles updates and changes much better.
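
In rough terms, the new structure looks like the following sketch. The module path is hypothetical and the inputs are inferred from the diffs quoted later in this conversation, so treat it as illustrative rather than the final interface:

```hcl
module "tiller" {
  # Hypothetical path to the k8s-tiller module in this repo.
  source = "./modules/k8s-tiller"

  # The kubernetes provider manages the underlying Deployment resource.
  namespace            = "${module.tiller_namespace.name}"
  tiller_image_version = "${var.tiller_version}"

  # TLS certs are generated out of band by kubergrunt; this input (the
  # "dependencies" pattern discussed below) orders the module after them.
  dependencies = ["${null_resource.tiller_tls_certs.id}"]
}
```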

GRUNTWORK_INSTALLER_VERSION: v0.0.21
TERRATEST_LOG_PARSER_VERSION: v0.13.13
- KUBERGRUNT_VERSION: v0.3.6
+ KUBERGRUNT_VERSION: v0.3.8-alpha1
@yorinasub17 (Contributor, Author):

REMINDER: don't forget to set this to a real release.


**Note**: You need to be an administrator to run this command. Specifically, this should be done with the same `kubectl`
context as the one used to deploy Tiller. You can use the `--kube-context` option to use a different context from the
default.
@yorinasub17 (Contributor, Author):

This is now irrelevant, because upgrades can be done by modifying the vars passed to the k8s-tiller module call.

@rileykarson (Contributor):

Just curious: This is done by upgrading the tiller_image_version?

@yorinasub17 (Contributor, Author):

Yup, that is correct. Updating tiller_image_version will trigger a new rollout of the Deployment resource, and the Pods will be restarted.
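
For example, an upgrade is just a variable change followed by `terraform apply` (the version value below is illustrative):

```hcl
# terraform.tfvars: bumping this value and re-running `terraform apply`
# triggers a rolling update of the Tiller Deployment, restarting the Pods.
tiller_version = "v2.14.0"
```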

# Then use that CA to generate server TLS certs
kubergrunt tls gen --namespace ${module.tiller_namespace.name} --ca-secret-name ${local.tls_ca_secret_name} --ca-namespace kube-system --secret-name ${local.tls_secret_name} --secret-label gruntwork.io/tiller-namespace=${var.tiller_namespace} --secret-label gruntwork.io/tiller-credentials=true --secret-label gruntwork.io/tiller-credentials-type=server --tls-subject-json '${jsonencode(var.tls_subject)}' --tls-private-key-algorithm ${var.private_key_algorithm} ${local.tls_algorithm_config} ${local.kubectl_config_options}
EOF
}
@yorinasub17 (Contributor, Author):

These long string kubergrunt calls are sad, but I am opening a discussion about open sourcing package-terraform-utilities so that we can use the esc_newl trickery from the EKS module: https://github.com/gruntwork-io/terraform-aws-eks/blob/master/examples/eks-cluster-with-supporting-services/core-services/main.tf#L126
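
Until that lands, one generic way to keep these commands readable (not the esc_newl approach from the linked EKS module, just plain interpolation) is to build the argument string from a list; a sketch with a truncated flag list:

```hcl
locals {
  # Partial list for illustration; the real call passes many more flags.
  tls_gen_args = [
    "--namespace ${module.tiller_namespace.name}",
    "--ca-secret-name ${local.tls_ca_secret_name}",
    "--ca-namespace kube-system",
    "--secret-name ${local.tls_secret_name}",
  ]

  tls_gen_command = "kubergrunt tls gen ${join(" ", local.tls_gen_args)}"
}
```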

@rileykarson (Contributor) left a comment:

Mostly just comment suggestions / questions


main.tf Outdated
namespace = "${module.tiller_namespace.name}"
tiller_image_version = "${var.tiller_version}"

# Kubergrunt will store the private key under tls.pem
@rileykarson (Contributor):

Suggested change:
```diff
- # Kubergrunt will store the private key under tls.pem
+ # kubergrunt stores the private key in the current directory as tls.pem
```

This reads a bit like a magic var to me right now; more imperative text and explicitly stating it's the current dir would help my understanding. (Unless it's wrong, of course.)

@yorinasub17 (Contributor, Author):

This is actually the name of the key in the Kubernetes Secret, which is mounted onto the container as a directory (hence the use of the word filename). I updated it to:

  # Kubergrunt will store the private key under the key "tls.pem" in the corresponding Secret resource, which will be
  # accessed as a file when mounted into the container.

@rileykarson (Contributor):

Thanks for clarifying!
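
To make the distinction concrete: the key in a Secret's data map becomes a filename when the Secret is mounted as a volume. In this PR kubergrunt creates the Secret itself; the resource below only illustrates the shape, with illustrative names and paths:

```hcl
resource "kubernetes_secret" "tiller_tls" {
  metadata {
    name      = "tiller-secret"   # illustrative name
    namespace = "kube-system"
  }

  data {
    # If this Secret is mounted at /etc/certs, the container sees the file
    # /etc/certs/tls.pem.
    "tls.pem" = "${file("certs/private-key.pem")}"   # illustrative source
  }
}
```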

main.tf Outdated
dependencies = ["${null_resource.tiller_tls_certs.id}"]
}

# We use kubergrunt to wait for Tiller to be deployed. Any resources that depend on this can assume Tiller is successfully deployed and up at that point.
@rileykarson (Contributor):

Should we eliminate the first bit?

> We use kubergrunt to wait for Tiller to be deployed.

It feels like we're repeating ourselves, since it's written right below. I'm a lot more interested in the why, which you cover in the second sentence:

> Any resources that depend on this can assume Tiller is successfully deployed and up at that point.

@yorinasub17 (Contributor, Author):

Makes sense. Updated to:

# The Deployment resources created in the module call to `k8s-tiller` will complete creation before the rollout is
# complete. We use kubergrunt here to wait for the deployment to complete, so that when this resource is done creating,
# any resources that depend on this can assume Tiller is successfully deployed and up at that point.

To be more explicit about the why.

@rileykarson (Contributor):

sgtm
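
For reference, the wait pattern being described looks roughly like this. The kubergrunt subcommand and flags are my assumption based on kubergrunt v0.3.x; check `kubergrunt helm wait-for-tiller --help` for the actual interface:

```hcl
resource "null_resource" "wait_for_tiller" {
  provisioner "local-exec" {
    # Assumed kubergrunt interface; verify before use. kubectl_config_options
    # carries the kubectl context flags used elsewhere in the example.
    command = "kubergrunt helm wait-for-tiller --tiller-namespace ${module.tiller_namespace.name} ${local.kubectl_config_options}"
  }
}
```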


* See the [root README](/README.md) for instructions on using Terraform modules.
* This module uses [the `kubernetes` provider](https://www.terraform.io/docs/providers/kubernetes/index.html).
* See the [examples](/examples) folder for example usage.
@rileykarson (Contributor):

Suggested change:
```diff
- * See the [examples](/examples) folder for example usage.
```

@yorinasub17 (Contributor, Author):

Updating this to point to the root example:

* See [the example at the root of the repo](/) for example usage.

# resources backing the values in the dependencies list.
# ---------------------------------------------------------------------------------------------------------------------

resource "null_resource" "dependency_getter" {
@rileykarson (Contributor):

Is this a variant of https://github.com/gruntwork-io/terraform-google-sql/blob/master/modules/cloud-sql/main.tf#L113-L120, or am I missing something? If that's the case (that they're duplicated), we should use a consistent implementation and naming scheme across modules.

@yorinasub17 (Contributor, Author):

This is the same thing, but I've been using this implementation in this repo for a while...

The other implementation is probably more correct though, so I will go through and update the usage.

@yorinasub17 (Contributor, Author):

Actually, I like using a list input as opposed to a string, as it makes the resulting input more readable when you have multiple dependencies. Any objection to standardizing on a list input for this? @autero1

@rileykarson (Contributor) commented May 1, 2019:

(to be clear, I don't mind which we choose, I just prefer that they're the same and we don't need to block on this)

@autero1 (Contributor):

Standardizing to a list input sounds good!
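
For reference, a minimal sketch of the list-based pattern being standardized on (Terraform 0.11 syntax, matching the rest of this PR):

```hcl
variable "dependencies" {
  description = "A list of IDs of resources that this module should wait on before creating its own resources."
  type        = "list"
  default     = []
}

# Joining the IDs into a trigger means this resource is not created until
# every ID in var.dependencies is known, so anything that depends on this
# resource inherits those orderings.
resource "null_resource" "dependency_getter" {
  triggers = {
    instance = "${join(",", var.dependencies)}"
  }
}
```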

@yorinasub17 (Contributor, Author) commented May 2, 2019:

OK, reverted back to using `dependencies` as the dependency input. This should now be at parity with gruntwork-io/terraform-google-sql#32. Going to merge and release this once the build passes.

@yorinasub17 (Contributor, Author) commented:

OK, build passed. Merging and releasing!

@yorinasub17 yorinasub17 merged commit 31a1e30 into master May 2, 2019
@yorinasub17 yorinasub17 deleted the yori-tiller-module branch May 2, 2019 16:32