26 changes: 15 additions & 11 deletions README.md
@@ -26,7 +26,9 @@ module "eks" {
source = "terraform-aws-modules/eks/aws"
cluster_name = "test-eks-cluster"
subnets = ["subnet-abcde012", "subnet-bcde012a"]
tags = "${map("Environment", "test")}"
tags = {
"Environment" = "test"
}
vpc_id = "vpc-abcde012"
}
```
@@ -62,8 +64,8 @@ Documentation should be modified within `main.tf` and generated using [terraform
Generate them like so:

```bash
go get github.com/segmentio/terraform-docs
terraform-docs md ./ | cat -s | tail -r | tail -n +2 | tail -r > README.md
go get github.com/getcloudnative/terraform-docs
Contributor:
Why would we switch to this random forked repo for terraform-docs?

Author:
At least this fork doesn't have the "This project currently has no active maintainers" label.
I wanted the features included in this PR: terraform-docs/terraform-docs#53
However, if you don't like it, I won't force this change.

Contributor:
"This project currently has no active maintainers"

Ahh I see!

Hi, I am the author of the afore-mentioned PR. Unfortunately, folks at segmentio have not been responsive to my suggestion of having me join as a maintainer 🤞

Contributor @brandonjbjelland (Aug 23, 2018):
This needs to stay: `| cat -s | tail -r | tail -n +2 | tail -r`. Markdown formatting guidelines, just like Terraform's formatting standards, shouldn't be considered optional, and this one-liner helps us maintain that.
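
# Step-by-step reading of the one-liner above, assuming BSD tail as shipped on
# macOS (GNU coreutils systems would use tac in place of tail -r):
terraform-docs md ./ |
  cat -s |       # squeeze runs of blank lines down to one
  tail -r |      # reverse the line order
  tail -n +2 |   # drop the first line of the reversed stream, i.e. the trailing line of the original output
  tail -r > README.md  # restore the original order and write the result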

Contributor:
Hmm, it doesn't look like I have `tac` on my Mac. I'd be okay with either a conditional or just explicitly stating:

# on Mac
terraform-docs md ./ | cat -s | tail -r | tail -n +2 | tail -r > README.md
# on Linux
terraform-docs md ./ | cat -s | tac | tail -n +2 | tac > README.md
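
# A minimal sketch of the "conditional" option mentioned above: pick a reverse
# command based on what the current OS provides (tail -r on BSD/macOS, tac on GNU/Linux).
if tail -r </dev/null >/dev/null 2>&1; then
  reverse() { tail -r; }
else
  reverse() { tac; }
fi
terraform-docs md ./ | cat -s | reverse | tail -n +2 | reverse > README.md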


> Why would we switch to this random forked repo for terraform-docs?

I am happy to inform you that segmentio/terraform-docs is again under active maintenance. The features in getcloudnative/terraform-docs, including the one that generates default values for aggregate types (emphasized in the discussion further down at #95 (comment)), have been merged into master. The option you want to look out for is `--with-aggregate-type-defaults`.
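
# A sketch of how the merged option might be used, assuming the 0.4-era CLI where
# flags precede the format argument; check `terraform-docs --help` for the exact syntax.
go get github.com/segmentio/terraform-docs
terraform-docs --with-aggregate-type-defaults md ./ > README.md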

Contributor:
Great, thanks for the update @metmajer!

I see that 0.4 was just released but this isn't available via brew. Is that correct?

That's correct, I'm still working on this. The PR to brew is planned to go out this week.

@metmajer (Oct 9, 2018):
@max-rocket-internet it took a bit longer than initially planned, but we now have a HomebrewFormula/terraform-docs.rb as part of the project, which will be updated with each release and contributed to the upstream Homebrew/homebrew-core repository. As of right now, we have the latest v0.4.5 available as binary releases and via brew.
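
# Assuming the Homebrew formula described above has landed (either via the project's
# HomebrewFormula or upstream homebrew-core), installation would look roughly like:
brew install terraform-docs
terraform-docs --version   # expected to report v0.4.5 or later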

terraform-docs md ./ > README.md
```

## Contributing
@@ -90,6 +92,7 @@ Many thanks to [the contributors listed here](https://github.com/terraform-aws-m

MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/LICENSE) for full details.


Contributor:
This is what I'm trying to avoid with my comment above.

## Inputs

| Name | Description | Type | Default | Required |
@@ -98,22 +101,22 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
| cluster_security_group_id | If provided, the EKS cluster will be attached to this security group. If not given, a security group will be created with necessary ingres/egress to work with the workers and provide API access to your current IP/32. | string | `` | no |
| cluster_version | Kubernetes version to use for the EKS cluster. | string | `1.10` | no |
| config_output_path | Determines where config files are placed if using configure_kubectl_session and you want config files to land outside the current working directory. Should end in a forward slash / . | string | `./` | no |
| kubeconfig_aws_authenticator_additional_args | Any additional arguments to pass to the authenticator such as the role to assume. e.g. ["-r", "MyEksRole"]. | list | `<list>` | no |
| kubeconfig_aws_authenticator_additional_args | Any additional arguments to pass to the authenticator such as the role to assume. e.g. ["-r", "MyEksRole"]. | list | `[]` | no |
| kubeconfig_aws_authenticator_command | Command to use to to fetch AWS EKS credentials. | string | `aws-iam-authenticator` | no |
| kubeconfig_aws_authenticator_env_variables | Environment variables that should be used when executing the authenticator. e.g. { AWS_PROFILE = "eks"}. | map | `<map>` | no |
| kubeconfig_aws_authenticator_env_variables | Environment variables that should be used when executing the authenticator. e.g. { AWS_PROFILE = "eks"}. | map | `{}` | no |
| kubeconfig_name | Override the default name used for items kubeconfig. | string | `` | no |
| manage_aws_auth | Whether to write and apply the aws-auth configmap file. | string | `true` | no |
| map_accounts | Additional AWS account numbers to add to the aws-auth configmap. See examples/eks_test_fixture/variables.tf for example format. | list | `<list>` | no |
| map_roles | Additional IAM roles to add to the aws-auth configmap. See examples/eks_test_fixture/variables.tf for example format. | list | `<list>` | no |
| map_users | Additional IAM users to add to the aws-auth configmap. See examples/eks_test_fixture/variables.tf for example format. | list | `<list>` | no |
| map_accounts | Additional AWS account numbers to add to the aws-auth configmap. See examples/eks_test_fixture/variables.tf for example format. | list | `[]` | no |
| map_roles | Additional IAM roles to add to the aws-auth configmap. See examples/eks_test_fixture/variables.tf for example format. | list | `[]` | no |
| map_users | Additional IAM users to add to the aws-auth configmap. See examples/eks_test_fixture/variables.tf for example format. | list | `[]` | no |
| subnets | A list of subnets to place the EKS cluster and workers within. | list | - | yes |
| tags | A map of tags to add to all resources. | map | `<map>` | no |
| tags | A map of tags to add to all resources. | map | `{}` | no |
| vpc_id | VPC where the cluster and workers will be deployed. | string | - | yes |
| worker_group_count | The number of maps contained within the worker_groups list. | string | `1` | no |
| worker_groups | A list of maps defining worker group configurations. See workers_group_defaults for valid keys. | list | `<list>` | no |
| worker_groups | A list of maps defining worker group configurations. See workers_group_defaults for valid keys. | list | `[ { "name": "default" } ]` | no |
| worker_security_group_id | If provided, all workers will be attached to this security group. If not given, a security group will be created with necessary ingres/egress to work with the EKS cluster. | string | `` | no |
| worker_sg_ingress_from_port | Minimum port number from which pods will accept communication. Must be changed to a lower value if some pods in your cluster will expose a port lower than 1025 (e.g. 22, 80, or 443). | string | `1025` | no |
| workers_group_defaults | Default values for target groups as defined by the list of maps. | map | `<map>` | no |
| workers_group_defaults | Default values for target groups as defined by the list of maps. | map | `{ "additional_userdata": "", "ami_id": "", "asg_desired_capacity": "1", "asg_max_size": "3", "asg_min_size": "1", "distro": "amazon", "ebs_optimized": true, "instance_type": "m4.large", "key_name": "", "kubelet_node_labels": "", "name": "count.index", "pre_userdata": "", "public_ip": false, "root_iops": "0", "root_volume_size": "100", "root_volume_type": "gp2", "spot_price": "", "subnets": "" }` | no |
Contributor:
👏

| write_kubeconfig | Whether to write a kubeconfig file containing the cluster configuration. | string | `true` | no |

## Outputs
@@ -132,3 +135,4 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
| worker_security_group_id | Security group ID attached to the EKS workers. |
| workers_asg_arns | IDs of the autoscaling groups containing workers. |
| workers_asg_names | Names of the autoscaling groups containing workers. |

22 changes: 16 additions & 6 deletions data.tf
@@ -15,7 +15,7 @@ data "aws_iam_policy_document" "workers_assume_role_policy" {
}
}

data "aws_ami" "eks_worker" {
data "aws_ami" "eks_worker_amazon" {
filter {
name = "name"
values = ["eks-worker-*"]
@@ -25,6 +25,16 @@ data "aws_ami" "eks_worker" {
owners = ["602401143452"] # Amazon
}

data "aws_ami" "eks_worker_ubuntu" {
filter {
name = "name"
values = ["ubuntu-eks/*"]
}

most_recent = true
owners = ["099720109477"] # Canonical
}

data "aws_iam_policy_document" "cluster_assume_role_policy" {
statement {
sid = "EKSClusterAssumeRole"
@@ -70,17 +80,17 @@ EOF
}

data "template_file" "userdata" {
template = "${file("${path.module}/templates/userdata.sh.tpl")}"
template = "${lookup(local.distros[lookup(var.worker_groups[count.index], "distro", var.workers_group_defaults["distro"])], "userdata_tpl")}"
count = "${var.worker_group_count}"

vars {
region = "${data.aws_region.current.name}"
cluster_name = "${aws_eks_cluster.this.name}"
endpoint = "${aws_eks_cluster.this.endpoint}"
cluster_auth_base64 = "${aws_eks_cluster.this.certificate_authority.0.data}"
max_pod_count = "${lookup(local.max_pod_per_node, lookup(var.worker_groups[count.index], "instance_type", lookup(var.workers_group_defaults, "instance_type")))}"
pre_userdata = "${lookup(var.worker_groups[count.index], "pre_userdata",lookup(var.workers_group_defaults, "pre_userdata"))}"
additional_userdata = "${lookup(var.worker_groups[count.index], "additional_userdata",lookup(var.workers_group_defaults, "additional_userdata"))}"
kubelet_node_labels = "${lookup(var.worker_groups[count.index], "kubelet_node_labels",lookup(var.workers_group_defaults, "kubelet_node_labels"))}"
max_pod_count = "${lookup(local.max_pod_per_node, lookup(var.worker_groups[count.index], "instance_type", var.workers_group_defaults["instance_type"]))}"
pre_userdata = "${lookup(var.worker_groups[count.index], "pre_userdata", var.workers_group_defaults["pre_userdata"])}"
additional_userdata = "${lookup(var.worker_groups[count.index], "additional_userdata", var.workers_group_defaults["additional_userdata"])}"
kubelet_node_labels = "${lookup(var.worker_groups[count.index], "kubelet_node_labels", var.workers_group_defaults["kubelet_node_labels"])}"
}
}
60 changes: 33 additions & 27 deletions examples/eks_test_fixture/main.tf
@@ -18,34 +18,40 @@ locals {

# the commented out worker group list below shows an example of how to define
# multiple worker groups of differing configurations
# worker_groups = "${list(
# map("asg_desired_capacity", "2",
# "asg_max_size", "10",
# "asg_min_size", "2",
# "instance_type", "m4.xlarge",
# "name", "worker_group_a",
# "subnets", "${join(",", module.vpc.private_subnets)}",
# ),
# map("asg_desired_capacity", "1",
# "asg_max_size", "5",
# "asg_min_size", "1",
# "instance_type", "m4.2xlarge",
# "name", "worker_group_b",
# "subnets", "${join(",", module.vpc.private_subnets)}",
# ),
# )}"
# worker_groups = [
# {
# name = "worker_group_a"
# instance_type = "m4.xlarge"
# distro = "ubuntu"
# asg_desired_capacity = 2
# asg_min_size = 2
# asg_max_size = 10
# subnets = "${join(",", module.vpc.private_subnets)}"
# },
# {
# name = "worker_group_b"
# instance_type = "m4.2xlarge"
# distro = "amazon"
# asg_desired_capacity = 1
# asg_min_size = 1
# asg_max_size = 5
# subnets = "${join(",", module.vpc.private_subnets)}"
# },
# ]

worker_groups = "${list(
map("instance_type","t2.small",
"additional_userdata","echo foo bar",
"subnets", "${join(",", module.vpc.private_subnets)}",
),
)}"
tags = "${map("Environment", "test",
"GithubRepo", "terraform-aws-eks",
"GithubOrg", "terraform-aws-modules",
"Workspace", "${terraform.workspace}",
)}"
worker_groups = [
{
instance_type = "t2.small"
additional_userdata = "echo foo bar"
subnets = "${join(",", module.vpc.private_subnets)}"
},
]
tags = {
"Environment" = "test"
"GithubRepo" = "terraform-aws-eks"
"GithubOrg" = "terraform-aws-modules"
"Workspace" = "${terraform.workspace}"
}
}

resource "random_string" "suffix" {
12 changes: 12 additions & 0 deletions local.tf
@@ -187,4 +187,16 @@ locals {
"x1e.8xlarge" = true
"x1e.xlarge" = true
}

distros = {
amazon = {
ami_id = "${data.aws_ami.eks_worker_amazon.id}"
userdata_tpl = "${file("${path.module}/templates/userdata.sh.tpl")}"
}

ubuntu = {
ami_id = "${data.aws_ami.eks_worker_ubuntu.id}"
userdata_tpl = "${file("${path.module}/templates/userdata.yaml.tpl")}"
}
}
}
8 changes: 5 additions & 3 deletions main.tf
@@ -27,7 +27,9 @@
* source = "terraform-aws-modules/eks/aws"
* cluster_name = "test-eks-cluster"
* subnets = ["subnet-abcde012", "subnet-bcde012a"]
* tags = "${map("Environment", "test")}"
* tags = {
* "Environment" = "test"
* }
* vpc_id = "vpc-abcde012"
* }
* ```
@@ -63,8 +65,8 @@
* Generate them like so:

* ```bash
* go get github.com/segmentio/terraform-docs
* terraform-docs md ./ | cat -s | tail -r | tail -n +2 | tail -r > README.md
* go get github.com/getcloudnative/terraform-docs
* terraform-docs md ./ > README.md
* ```

* ## Contributing
5 changes: 5 additions & 0 deletions templates/userdata.yaml.tpl
@@ -0,0 +1,5 @@
#cloud-config
# Allow user supplied pre userdata
${pre_userdata}
# Allow user supplied userdata
${additional_userdata}
3 changes: 2 additions & 1 deletion variables.tf
@@ -82,12 +82,13 @@ variable "workers_group_defaults" {
default = {
name = "count.index" # Name of the worker group. Literal count.index will never be used but if name is not set, the count.index interpolation will be used.
ami_id = "" # AMI ID for the eks workers. If none is provided, Terraform will search for the latest version of their EKS optimized worker AMI.
distro = "amazon" # Worker distro (amazon or ubuntu).
asg_desired_capacity = "1" # Desired worker capacity in the autoscaling group.
asg_max_size = "3" # Maximum worker capacity in the autoscaling group.
asg_min_size = "1" # Minimum worker capacity in the autoscaling group.
instance_type = "m4.large" # Size of the workers instances.
spot_price = "" # Cost of spot instance.
root_volume_size = "100" # root volume size of workers instances.
root_volume_size = "100" # root volume size of workers instances.
root_volume_type = "gp2" # root volume type of workers instances, can be 'standard', 'gp2', or 'io1'
root_iops = "0" # The amount of provisioned IOPS. This must be set with a volume_type of "io1".
key_name = "" # The key name that should be used for the instances in the autoscaling group
50 changes: 28 additions & 22 deletions workers.tf
@@ -1,18 +1,23 @@
resource "aws_autoscaling_group" "workers" {
name_prefix = "${aws_eks_cluster.this.name}-${lookup(var.worker_groups[count.index], "name", count.index)}"
desired_capacity = "${lookup(var.worker_groups[count.index], "asg_desired_capacity", lookup(var.workers_group_defaults, "asg_desired_capacity"))}"
max_size = "${lookup(var.worker_groups[count.index], "asg_max_size",lookup(var.workers_group_defaults, "asg_max_size"))}"
min_size = "${lookup(var.worker_groups[count.index], "asg_min_size",lookup(var.workers_group_defaults, "asg_min_size"))}"
desired_capacity = "${lookup(var.worker_groups[count.index], "asg_desired_capacity", var.workers_group_defaults["asg_desired_capacity"])}"
max_size = "${lookup(var.worker_groups[count.index], "asg_max_size", var.workers_group_defaults["asg_max_size"])}"
min_size = "${lookup(var.worker_groups[count.index], "asg_min_size", var.workers_group_defaults["asg_min_size"])}"
launch_configuration = "${element(aws_launch_configuration.workers.*.id, count.index)}"
vpc_zone_identifier = ["${split(",", coalesce(lookup(var.worker_groups[count.index], "subnets", ""), join(",", var.subnets)))}"]
count = "${var.worker_group_count}"

tags = ["${concat(
list(
map("key", "Name", "value", "${aws_eks_cluster.this.name}-${lookup(var.worker_groups[count.index], "name", count.index)}-eks_asg", "propagate_at_launch", true),
map("key", "kubernetes.io/cluster/${aws_eks_cluster.this.name}", "value", "owned", "propagate_at_launch", true),
map("key", "Name", "value", "${aws_eks_cluster.this.name}-${lookup(var.worker_groups[count.index], "name", count.index)}-eks_asg", "propagate_at_launch", "true"),
map("key", "kubernetes.io/cluster/${aws_eks_cluster.this.name}", "value", "owned", "propagate_at_launch", "true"),
),
local.asg_tags)
local.asg_tags,
list(
map("key", "${lookup(var.worker_groups[count.index], "distro", var.workers_group_defaults["distro"]) == "ubuntu" ? "com.ubuntu.cloud:eks:kubelet:" : ""}max-pods-per-node",
"value", "${local.max_pod_per_node[lookup(var.worker_groups[count.index], "instance_type", var.workers_group_defaults["instance_type"])]}",
"propagate_at_launch", "true")
))
}"]

lifecycle {
@@ -22,25 +27,26 @@ resource "aws_autoscaling_group" "workers" {

resource "aws_launch_configuration" "workers" {
name_prefix = "${aws_eks_cluster.this.name}-${lookup(var.worker_groups[count.index], "name", count.index)}"
associate_public_ip_address = "${lookup(var.worker_groups[count.index], "public_ip", lookup(var.workers_group_defaults, "public_ip"))}"
associate_public_ip_address = "${lookup(var.worker_groups[count.index], "public_ip", var.workers_group_defaults["public_ip"])}"
security_groups = ["${local.worker_security_group_id}"]
iam_instance_profile = "${aws_iam_instance_profile.workers.id}"
image_id = "${lookup(var.worker_groups[count.index], "ami_id", data.aws_ami.eks_worker.id)}"
instance_type = "${lookup(var.worker_groups[count.index], "instance_type", lookup(var.workers_group_defaults, "instance_type"))}"
key_name = "${lookup(var.worker_groups[count.index], "key_name", lookup(var.workers_group_defaults, "key_name"))}"
user_data_base64 = "${base64encode(element(data.template_file.userdata.*.rendered, count.index))}"
ebs_optimized = "${lookup(var.worker_groups[count.index], "ebs_optimized", lookup(local.ebs_optimized, lookup(var.worker_groups[count.index], "instance_type", lookup(var.workers_group_defaults, "instance_type")), false))}"
spot_price = "${lookup(var.worker_groups[count.index], "spot_price", lookup(var.workers_group_defaults, "spot_price"))}"
count = "${var.worker_group_count}"
image_id = "${lookup(var.worker_groups[count.index], "ami_id", lookup(local.distros[lookup(var.worker_groups[count.index], "distro", var.workers_group_defaults["distro"])], "ami_id"))}"

instance_type = "${lookup(var.worker_groups[count.index], "instance_type", var.workers_group_defaults["instance_type"])}"
key_name = "${lookup(var.worker_groups[count.index], "key_name", var.workers_group_defaults["key_name"])}"
user_data_base64 = "${base64encode(element(data.template_file.userdata.*.rendered, count.index))}"
ebs_optimized = "${lookup(var.worker_groups[count.index], "ebs_optimized", lookup(local.ebs_optimized, lookup(var.worker_groups[count.index], "instance_type", var.workers_group_defaults["instance_type"]), false))}"
spot_price = "${lookup(var.worker_groups[count.index], "spot_price", var.workers_group_defaults["spot_price"])}"
count = "${var.worker_group_count}"

lifecycle {
create_before_destroy = true
}

root_block_device {
volume_size = "${lookup(var.worker_groups[count.index], "root_volume_size", lookup(var.workers_group_defaults, "root_volume_size"))}"
volume_type = "${lookup(var.worker_groups[count.index], "root_volume_type", lookup(var.workers_group_defaults, "root_volume_type"))}"
iops = "${lookup(var.worker_groups[count.index], "root_iops", lookup(var.workers_group_defaults, "root_iops"))}"
volume_size = "${lookup(var.worker_groups[count.index], "root_volume_size", var.workers_group_defaults["root_volume_size"])}"
volume_type = "${lookup(var.worker_groups[count.index], "root_volume_type", var.workers_group_defaults["root_volume_type"])}"
iops = "${lookup(var.worker_groups[count.index], "root_iops", var.workers_group_defaults["root_iops"])}"
delete_on_termination = true
}
}
@@ -115,9 +121,9 @@ resource "aws_iam_role_policy_attachment" "workers_AmazonEC2ContainerRegistryRea
resource "null_resource" "tags_as_list_of_maps" {
count = "${length(keys(var.tags))}"

triggers = "${map(
"key", "${element(keys(var.tags), count.index)}",
"value", "${element(values(var.tags), count.index)}",
"propagate_at_launch", "true"
)}"
triggers = {
key = "${element(keys(var.tags), count.index)}"
value = "${element(values(var.tags), count.index)}"
propagate_at_launch = "true"
}
Contributor:
This looks nicer. Thanks! 👏

}