This repository was archived by the owner on Dec 16, 2020. It is now read-only.
2 changes: 1 addition & 1 deletion .circleci/config.yml
  environment:
    GRUNTWORK_INSTALLER_VERSION: v0.0.21
    TERRATEST_LOG_PARSER_VERSION: v0.13.13
-   KUBERGRUNT_VERSION: v0.3.6
+   KUBERGRUNT_VERSION: v0.3.8
    HELM_VERSION: v2.12.2
    MODULE_CI_VERSION: v0.13.12
    TERRAFORM_VERSION: 0.11.11
3 changes: 3 additions & 0 deletions .gitignore
*/build/
out/

# Module artifacts
os.txt

# Go best practices dictate that libraries should not include the vendor directory
vendor

22 changes: 16 additions & 6 deletions README.md
This repo provides a Gruntwork IaC Package and has the following folder structure:
* [modules](/modules): This folder contains the main implementation code for this Module, broken down into multiple
standalone Submodules.

The primary module is:

* [k8s-tiller](/modules/k8s-tiller): Deploy Tiller with all the security features turned on. This includes using
`Secrets` for storing state and enabling TLS verification.

The deployed Tiller requires TLS certificate key pairs to operate. Additionally, clients will each need their own
TLS certificate key pairs to authenticate to the deployed Tiller instance. This is based on [the kubergrunt model of
deploying helm](https://github.com/gruntwork-io/kubergrunt/blob/master/HELM_GUIDE.md).
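
For reference, a minimal sketch of a call to this module might look like the following. The argument names mirror the
root `main.tf` in this repo, but the pinned `ref`, namespace, and `Secret` names here are hypothetical placeholders:

```
module "tiller" {
  # Hypothetical pinned version; use the release you have tested against.
  source = "git::git@github.com:gruntwork-io/terraform-kubernetes-helm.git//modules/k8s-tiller?ref=v0.3.0"

  namespace                                = "tiller"
  tiller_service_account_name              = "tiller-service-account"
  tiller_service_account_token_secret_name = "tiller-service-account-token"

  # The TLS Secret is referenced by name, so no certificate material passes
  # through the Terraform state.
  tiller_tls_secret_name   = "tiller-certs"
  tiller_tls_key_file_name = "tls.pem"

  tiller_image_version = "v2.12.2"
}
```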

There are also several supporting modules that help with setting up the deployment:

* [k8s-namespace](/modules/k8s-namespace): Provision a Kubernetes `Namespace` with a default set of RBAC roles.
* [k8s-namespace-roles](/modules/k8s-namespace-roles): Provision a default set of RBAC roles to use in a `Namespace`.
* [k8s-service-account](/modules/k8s-service-account): Provision a Kubernetes `ServiceAccount`.

* [examples](/examples): This folder contains examples of how to use the Submodules. The [example root
README](/examples/README.md) provides a quickstart guide on how to use the Submodules in this Module.
* [test](/test): Automated tests for the Submodules and examples.

## What is Kubernetes?

72 changes: 29 additions & 43 deletions examples/k8s-tiller-minikube/README.md

## Installing necessary tools

In addition to `terraform`, this guide uses `kubergrunt` to manage TLS certificates for the deployment of Tiller. You
can read more about the decision behind this approach in [the Appendix](#appendix-a-why-kubergrunt) of this guide.

This means that your system needs to be configured to be able to find the `terraform`, `kubergrunt`, and `helm` client
utilities on the system `PATH`. Refer to each tool's own documentation for installation instructions.

To deploy Tiller, we will use the example Terraform code at the root of this repo:
- `terraform apply`
- Fill in the required variables based on your needs. <!-- TODO: show example inputs here -->

The Terraform code creates a few resources before deploying Tiller:

- A Kubernetes `Namespace` (the `tiller-namespace`) to house the Tiller instance. This namespace is where all the
Kubernetes resources that Tiller needs to function will live. In production, you will want to lock down access to this
namespace.
- A Kubernetes `ServiceAccount` (the `tiller-service-account`) that Tiller will use, with permissions in both the
`tiller-namespace` and the `resource-namespace`, so that it can:
  - Manage its own resources in the `tiller-namespace`, where the Tiller metadata (e.g., release tracking information) will live.
  - Manage the resources deployed by helm charts in the `resource-namespace`.
- Using `kubergrunt`, generate a TLS CA certificate key pair and a set of signed certificate key pairs for the server
and the client. These will then be uploaded as `Secrets` on the Kubernetes cluster.

These resources are then passed into the `k8s-tiller` module, which creates the Tiller `Deployment` resources. Once the
resources are applied to the cluster, the code waits for the Tiller `Deployment` to roll out its `Pods` using
`kubergrunt helm wait-for-tiller`.

Finally, to allow you to use `helm` right away, this code also sets up the local `helm` client. This involves:

- Using the CA TLS certificate key pair, create a signed TLS certificate key pair to use to identify the client.
- Upload the certificate key pair to the `tiller-namespace`.
- Grant the RBAC entity access to:
  - Get the client certificate `Secret` (`kubergrunt helm configure` uses this to install the client certificate
    key pair locally)
  - Get and List pods in `tiller-namespace` (the `helm` client uses this to find the Tiller pod)
  - Create a port forward to the Tiller pod (the `helm` client uses this to make requests to the Tiller pod)

- Install the client certificate key pair to the helm home directory so the client can use it.
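
For a concrete picture of that grant, here is a rough sketch of the kind of `Role` that `kubergrunt helm grant` ends up
creating. The names and namespace are hypothetical, and the exact rules kubergrunt generates may differ:

```
resource "kubernetes_role" "helm_client" {
  metadata {
    name      = "helm-client-access"
    namespace = "tiller"
  }

  # Read the client certificate Secret (used by `kubergrunt helm configure`).
  rule {
    api_groups     = [""]
    resources      = ["secrets"]
    resource_names = ["client-certs"]
    verbs          = ["get"]
  }

  # Find the Tiller pod.
  rule {
    api_groups = [""]
    resources  = ["pods"]
    verbs      = ["get", "list"]
  }

  # Open a port forward to the Tiller pod.
  rule {
    api_groups = [""]
    resources  = ["pods/portforward"]
    verbs      = ["create"]
  }
}
```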

At the end of the `apply`, you should now have a working Tiller deployment with your `helm` client configured to access
it. So let's verify that in the next step!


## Verify Tiller Deployment
At the end of this, your users should have the same helm client setup as above.


## Upgrading Deployed Tiller

At some point in the lifetime of the Tiller deployment, you will want to upgrade it. You can upgrade the deployed Tiller
instance using the helm client with the following command:

```
helm init --upgrade --tiller-namespace TILLER_NAMESPACE
```

**Note**: You need to be an administrator to run this command. Specifically, this should be done with the same `kubectl`
context as the one used to deploy Tiller. You can use the `--kube-context` option to use a different context from the
default.
> **Contributor Author:** This is now irrelevant, because upgrades can be done by modifying the vars to the
> `k8s-tiller` module call.

> **Contributor:** Just curious: This is done by upgrading the `tiller_image_version`?

> **Contributor Author:** Yup, that is correct. Updating `tiller_image_version` will trigger a new rollout of the
> `Deployment` resource and the Pods will be restarted.
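
A sketch of what that upgrade looks like in practice, assuming the module call from the root `main.tf` (the new
version number below is hypothetical):

```
module "tiller" {
  source = "git::git@github.com:gruntwork-io/terraform-kubernetes-helm.git//modules/k8s-tiller?ref=v0.3.0"

  # ... other arguments unchanged from the original call ...

  # Bumping this value and running `terraform apply` updates the Deployment in
  # place; Kubernetes then rolls the Pods onto the new Tiller image.
  tiller_image_version = "v2.12.3"
}
```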



## Appendix A: Why kubergrunt?

This Terraform example is not idiomatic Terraform code in that it relies on an external binary, `kubergrunt`, as
opposed to implementing the functionalities using pure Terraform providers.

That said, we decided to use this approach because of limitations in the existing providers to implement the
functionalities here in pure Terraform code.

`kubergrunt` fulfills the role of generating and managing TLS certificate key pairs using Kubernetes `Secrets` as a
database. This allows us to deploy Tiller with TLS verification enabled. We could instead use the `tls` and `kubernetes`
providers in Terraform, but this has a few drawbacks:

- The Helm provider does not have [a resource that manages
Tiller](https://github.com/terraform-providers/terraform-provider-helm/issues/134).
- The [TLS provider](https://www.terraform.io/docs/providers/tls/index.html) stores the certificate key pairs in plain
text in the Terraform state.
- The Kubernetes Secret resource in the provider [also stores the value in plain text in the Terraform
state](https://www.terraform.io/docs/providers/kubernetes/r/secret.html).
- The grant and configure workflows are better suited as CLI tools than in Terraform.
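
To make the state-leak drawbacks above concrete, here is a minimal sketch of what the pure-Terraform route would look
like with the `tls` provider. This is hypothetical code that this repo deliberately avoids:

```
# Both the private key and the certificate below are stored, in plain text, as
# resource attributes in the Terraform state file.
resource "tls_private_key" "ca" {
  algorithm   = "ECDSA"
  ecdsa_curve = "P256"
}

resource "tls_self_signed_cert" "ca" {
  key_algorithm   = "${tls_private_key.ca.algorithm}"
  private_key_pem = "${tls_private_key.ca.private_key_pem}"

  subject {
    common_name = "tiller"
  }

  validity_period_hours = 8760
  is_ca_certificate     = true
  allowed_uses          = ["cert_signing", "crl_signing"]
}
```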

`kubergrunt` works around this by generating the TLS certs and storing them in Kubernetes `Secrets` directly. In this
way, the generated TLS certs never leak into the Terraform state as they are referenced by name when deploying Tiller as
opposed to by value.

Note that we intend to implement a pure Terraform version of this functionality, but we plan to continue to maintain the
`kubergrunt` approach for folks who are wary of leaking secrets into Terraform state.
92 changes: 76 additions & 16 deletions main.tf
module "tiller_namespace" {
# When using these modules in your own templates, you will need to use a Git URL with a ref attribute that pins you
# to a specific version of the modules, such as the following example:
# source = "git::git@github.com:gruntwork-io/terraform-kubernetes-helm.git//modules/k8s-namespace?ref=v0.1.0"
# source = "git::git@github.com:gruntwork-io/terraform-kubernetes-helm.git//modules/k8s-namespace?ref=v0.3.0"
source = "./modules/k8s-namespace"

name = "${var.tiller_namespace}"
Expand All @@ -31,7 +31,7 @@ module "tiller_namespace" {
module "resource_namespace" {
# When using these modules in your own templates, you will need to use a Git URL with a ref attribute that pins you
# to a specific version of the modules, such as the following example:
# source = "git::git@github.com:gruntwork-io/terraform-kubernetes-helm.git//modules/k8s-namespace?ref=v0.1.0"
# source = "git::git@github.com:gruntwork-io/terraform-kubernetes-helm.git//modules/k8s-namespace?ref=v0.3.0"
source = "./modules/k8s-namespace"

name = "${var.resource_namespace}"
Expand All @@ -40,7 +40,7 @@ module "resource_namespace" {
module "tiller_service_account" {
# When using these modules in your own templates, you will need to use a Git URL with a ref attribute that pins you
# to a specific version of the modules, such as the following example:
# source = "git::git@github.com:gruntwork-io/terraform-kubernetes-helm.git//modules/k8s-service-account?ref=v0.1.0"
# source = "git::git@github.com:gruntwork-io/terraform-kubernetes-helm.git//modules/k8s-service-account?ref=v0.3.0"
source = "./modules/k8s-service-account"

name = "${var.service_account_name}"
Expand All @@ -63,17 +63,88 @@ module "tiller_service_account" {
}
}

# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# GENERATE TLS CERTIFICATES FOR USE WITH TILLER
# This will use kubergrunt to generate TLS certificates, and upload them as Kubernetes Secrets that can then be used by
# Tiller.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

resource "null_resource" "tiller_tls_certs" {
provisioner "local-exec" {
command = <<-EOF
# Generate CA TLS certs
kubergrunt tls gen --ca --namespace kube-system --secret-name ${local.tls_ca_secret_name} --secret-label gruntwork.io/tiller-namespace=${var.tiller_namespace} --secret-label gruntwork.io/tiller-credentials=true --secret-label gruntwork.io/tiller-credentials-type=ca --tls-subject-json '${jsonencode(var.tls_subject)}' --tls-private-key-algorithm ${var.private_key_algorithm} ${local.tls_algorithm_config} ${local.kubectl_config_options}

# Then use that CA to generate server TLS certs
kubergrunt tls gen --namespace ${module.tiller_namespace.name} --ca-secret-name ${local.tls_ca_secret_name} --ca-namespace kube-system --secret-name ${local.tls_secret_name} --secret-label gruntwork.io/tiller-namespace=${var.tiller_namespace} --secret-label gruntwork.io/tiller-credentials=true --secret-label gruntwork.io/tiller-credentials-type=server --tls-subject-json '${jsonencode(var.tls_subject)}' --tls-private-key-algorithm ${var.private_key_algorithm} ${local.tls_algorithm_config} ${local.kubectl_config_options}
EOF
}
Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

These long string kubergrunt calls are sad, but I am opening a discussion to open source package-terraform-utilities so that we can do the esc_newl trickery in the EKS module: https://github.com/gruntwork-io/terraform-aws-eks/blob/master/examples/eks-cluster-with-supporting-services/core-services/main.tf#L126

}

# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DEPLOY TILLER
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

module "tiller" {
# When using these modules in your own templates, you will need to use a Git URL with a ref attribute that pins you
# to a specific version of the modules, such as the following example:
# source = "git::git@github.com:gruntwork-io/terraform-kubernetes-helm.git//modules/k8s-tiller?ref=v0.3.0"
source = "./modules/k8s-tiller"

tiller_service_account_name = "${module.tiller_service_account.name}"
tiller_service_account_token_secret_name = "${module.tiller_service_account.token_secret_name}"
tiller_tls_secret_name = "${local.tls_secret_name}"
namespace = "${module.tiller_namespace.name}"
tiller_image_version = "${var.tiller_version}"

# Kubergrunt will store the private key under the key "tls.pem" in the corresponding Secret resource, which will be
# accessed as a file when mounted into the container.
tiller_tls_key_file_name = "tls.pem"

dependencies = ["${null_resource.tiller_tls_certs.id}"]
}

# The Deployment resource created in the module call to `k8s-tiller` will finish creating before the rollout of the
# Tiller Pods is complete. We use kubergrunt here to wait for the rollout to complete, so that when this resource is
# done creating, any resources that depend on it can assume Tiller is successfully deployed and up at that point.
resource "null_resource" "wait_for_tiller" {
  provisioner "local-exec" {
    command = "kubergrunt helm wait-for-tiller --tiller-namespace ${module.tiller_namespace.name} --tiller-deployment-name ${module.tiller.deployment_name} --expected-tiller-version ${var.tiller_version}"
  }
}

# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# CONFIGURE OPERATOR HELM CLIENT
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

resource "null_resource" "grant_and_configure_helm" {
count = "${var.configure_helm}"

provisioner "local-exec" {
command = <<-EOF
kubergrunt helm grant --tiller-namespace ${module.tiller_namespace.name} ${local.kubectl_config_options} --tls-subject-json '${jsonencode(var.client_tls_subject)}' ${local.configure_args}

kubergrunt helm configure --helm-home ${local.helm_home_with_default} --tiller-namespace ${module.tiller_namespace.name} --resource-namespace ${module.resource_namespace.name} ${local.kubectl_config_options} ${local.configure_args}
EOF
}

depends_on = ["null_resource.wait_for_tiller"]
}

# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# COMPUTATIONS
# These locals compute various useful information used throughout this Terraform module.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

locals {
+ helm_home_with_default = "${var.helm_home == "" ? pathexpand("~/.helm") : var.helm_home}"
  kubectl_config_options = "${var.kubectl_config_context_name != "" ? "--kubectl-context-name ${var.kubectl_config_context_name}" : ""} ${var.kubectl_config_path != "" ? "--kubeconfig ${var.kubectl_config_path}" : ""}"

+ tls_ca_secret_name = "${var.tiller_namespace}-namespace-tiller-ca-certs"
+ tls_secret_name    = "tiller-certs"

  tls_algorithm_config = "${var.private_key_algorithm == "ECDSA" ? "--tls-private-key-ecdsa-curve ${var.private_key_ecdsa_curve}" : "--tls-private-key-rsa-bits ${var.private_key_rsa_bits}"}"

- undeploy_args          = "${var.force_undeploy ? "--force" : ""} ${var.undeploy_releases ? "--undeploy-releases" : ""}"
- helm_home_with_default = "${var.helm_home == "" ? pathexpand("~/.helm") : var.helm_home}"

  configure_args = "${
    var.helm_client_rbac_user != "" ? "--rbac-user ${var.helm_client_rbac_user}"
    # ... (--rbac-group / --rbac-service-account branches folded in the diff view) ...
    : ""
  }"
}

resource "null_resource" "tiller" {
provisioner "local-exec" {
command = "kubergrunt helm deploy ${local.kubectl_config_options} --service-account ${module.tiller_service_account.name} --resource-namespace ${module.resource_namespace.name} --tiller-namespace ${module.tiller_namespace.name} --tls-private-key-algorithm ${var.private_key_algorithm} ${local.tls_algorithm_config} --tls-subject-json '${jsonencode(var.tls_subject)}' --client-tls-subject-json '${jsonencode(var.client_tls_subject)}' --helm-home ${local.helm_home_with_default} ${local.configure_args} --tiller-version ${var.tiller_version}"
}

provisioner "local-exec" {
command = "kubergrunt helm undeploy ${local.kubectl_config_options} --helm-home ${local.helm_home_with_default} --tiller-namespace ${module.tiller_namespace.name} ${local.undeploy_args}"
when = "destroy"
}
}
4 changes: 2 additions & 2 deletions modules/k8s-namespace-roles/main.tf
# ---------------------------------------------------------------------------------------------------------------------

resource "null_resource" "dependency_getter" {
provisioner "local-exec" {
command = "echo ${length(var.dependencies)}"
triggers = {
instance = "${join(",", var.dependencies)}"
}
}
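
The change above swaps the `local-exec` echo for a `triggers` map, so the dependency hook no longer shells out and is
re-created whenever the dependency list changes. A sketch of how the hook is presumably meant to be consumed follows;
the `kubernetes_namespace` resource here is illustrative, not this module's actual code:

```
variable "dependencies" {
  description = "List of resource IDs this module should wait for before creating its own resources."
  type        = "list"
  default     = []
}

resource "null_resource" "dependency_getter" {
  triggers = {
    instance = "${join(",", var.dependencies)}"
  }
}

# Illustrative consumer: by depending on the getter, this resource is not
# created until everything listed in var.dependencies exists.
resource "kubernetes_namespace" "example" {
  metadata {
    name = "example"
  }

  depends_on = ["null_resource.dependency_getter"]
}
```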

4 changes: 2 additions & 2 deletions modules/k8s-namespace/main.tf
# ---------------------------------------------------------------------------------------------------------------------

resource "null_resource" "dependency_getter" {
provisioner "local-exec" {
command = "echo ${length(var.dependencies)}"
triggers = {
instance = "${join(",", var.dependencies)}"
}
}

4 changes: 2 additions & 2 deletions modules/k8s-service-account/main.tf
# ---------------------------------------------------------------------------------------------------------------------

resource "null_resource" "dependency_getter" {
provisioner "local-exec" {
command = "echo ${length(var.dependencies)}"
triggers = {
instance = "${join(",", var.dependencies)}"
}
}

7 changes: 7 additions & 0 deletions modules/k8s-service-account/outputs.tf
output "name" {
  # ... (description and value folded in the diff view) ...

  depends_on = ["kubernetes_role_binding.service_account_role_binding"]
}

output "token_secret_name" {
description = "The name of the secret that holds the default ServiceAccount token that can be used to authenticate to the Kubernetes API."
value = "${kubernetes_service_account.service_account.default_secret_name}"

depends_on = ["kubernetes_role_binding.service_account_role_binding"]
}