Added support for Ubuntu EKS worker nodes #95
@@ -26,7 +26,9 @@ module "eks" {
   source = "terraform-aws-modules/eks/aws"
   cluster_name = "test-eks-cluster"
   subnets = ["subnet-abcde012", "subnet-bcde012a"]
-  tags = "${map("Environment", "test")}"
+  tags = {
+    "Environment" = "test"
+  }
   vpc_id = "vpc-abcde012"
 }
 ```
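Both expressions in this hunk build the same single-entry map; a minimal sketch for comparison (the local names are illustrative only):

```hcl
# Terraform 0.11: both locals evaluate to the same { Environment = "test" } map.
locals {
  tags_via_function = "${map("Environment", "test")}"

  tags_via_literal = {
    "Environment" = "test"
  }
}
```

The literal form reads more naturally and avoids the extra interpolation.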
@@ -62,8 +64,8 @@ Documentation should be modified within `main.tf` and generated using [terraform
 Generate them like so:
 
 ```bash
-go get github.com/segmentio/terraform-docs
-terraform-docs md ./ | cat -s | tail -r | tail -n +2 | tail -r > README.md
+go get github.com/getcloudnative/terraform-docs
+terraform-docs md ./ > README.md
 ```
 
 ## Contributing
@@ -90,6 +92,7 @@ Many thanks to [the contributors listed here](https://github.com/terraform-aws-m
 
 MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/LICENSE) for full details.
 
+

Review comment: This is what I'm trying to avoid with my comment above.

 ## Inputs
 
 | Name | Description | Type | Default | Required |
@@ -98,22 +101,22 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
 | cluster_security_group_id | If provided, the EKS cluster will be attached to this security group. If not given, a security group will be created with necessary ingres/egress to work with the workers and provide API access to your current IP/32. | string | `` | no |
 | cluster_version | Kubernetes version to use for the EKS cluster. | string | `1.10` | no |
 | config_output_path | Determines where config files are placed if using configure_kubectl_session and you want config files to land outside the current working directory. Should end in a forward slash / . | string | `./` | no |
-| kubeconfig_aws_authenticator_additional_args | Any additional arguments to pass to the authenticator such as the role to assume. e.g. ["-r", "MyEksRole"]. | list | `<list>` | no |
+| kubeconfig_aws_authenticator_additional_args | Any additional arguments to pass to the authenticator such as the role to assume. e.g. ["-r", "MyEksRole"]. | list | `[]` | no |
 | kubeconfig_aws_authenticator_command | Command to use to to fetch AWS EKS credentials. | string | `aws-iam-authenticator` | no |
-| kubeconfig_aws_authenticator_env_variables | Environment variables that should be used when executing the authenticator. e.g. { AWS_PROFILE = "eks"}. | map | `<map>` | no |
+| kubeconfig_aws_authenticator_env_variables | Environment variables that should be used when executing the authenticator. e.g. { AWS_PROFILE = "eks"}. | map | `{}` | no |
 | kubeconfig_name | Override the default name used for items kubeconfig. | string | `` | no |
 | manage_aws_auth | Whether to write and apply the aws-auth configmap file. | string | `true` | no |
-| map_accounts | Additional AWS account numbers to add to the aws-auth configmap. See examples/eks_test_fixture/variables.tf for example format. | list | `<list>` | no |
-| map_roles | Additional IAM roles to add to the aws-auth configmap. See examples/eks_test_fixture/variables.tf for example format. | list | `<list>` | no |
-| map_users | Additional IAM users to add to the aws-auth configmap. See examples/eks_test_fixture/variables.tf for example format. | list | `<list>` | no |
+| map_accounts | Additional AWS account numbers to add to the aws-auth configmap. See examples/eks_test_fixture/variables.tf for example format. | list | `[]` | no |
+| map_roles | Additional IAM roles to add to the aws-auth configmap. See examples/eks_test_fixture/variables.tf for example format. | list | `[]` | no |
+| map_users | Additional IAM users to add to the aws-auth configmap. See examples/eks_test_fixture/variables.tf for example format. | list | `[]` | no |
 | subnets | A list of subnets to place the EKS cluster and workers within. | list | - | yes |
-| tags | A map of tags to add to all resources. | map | `<map>` | no |
+| tags | A map of tags to add to all resources. | map | `{}` | no |
 | vpc_id | VPC where the cluster and workers will be deployed. | string | - | yes |
 | worker_group_count | The number of maps contained within the worker_groups list. | string | `1` | no |
-| worker_groups | A list of maps defining worker group configurations. See workers_group_defaults for valid keys. | list | `<list>` | no |
+| worker_groups | A list of maps defining worker group configurations. See workers_group_defaults for valid keys. | list | `[ { "name": "default" } ]` | no |
 | worker_security_group_id | If provided, all workers will be attached to this security group. If not given, a security group will be created with necessary ingres/egress to work with the EKS cluster. | string | `` | no |
 | worker_sg_ingress_from_port | Minimum port number from which pods will accept communication. Must be changed to a lower value if some pods in your cluster will expose a port lower than 1025 (e.g. 22, 80, or 443). | string | `1025` | no |
-| workers_group_defaults | Default values for target groups as defined by the list of maps. | map | `<map>` | no |
+| workers_group_defaults | Default values for target groups as defined by the list of maps. | map | `{ "additional_userdata": "", "ami_id": "", "asg_desired_capacity": "1", "asg_max_size": "3", "asg_min_size": "1", "distro": "amazon", "ebs_optimized": true, "instance_type": "m4.large", "key_name": "", "kubelet_node_labels": "", "name": "count.index", "pre_userdata": "", "public_ip": false, "root_iops": "0", "root_volume_size": "100", "root_volume_type": "gp2", "spot_price": "", "subnets": "" }` | no |

Review comment: 👏

 | write_kubeconfig | Whether to write a kubeconfig file containing the cluster configuration. | string | `true` | no |
 
 ## Outputs
 
@@ -132,3 +135,4 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
 | worker_security_group_id | Security group ID attached to the EKS workers. |
 | workers_asg_arns | IDs of the autoscaling groups containing workers. |
 | workers_asg_names | Names of the autoscaling groups containing workers. |
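To tie the inputs above to the purpose of this PR, here is a minimal, illustrative module call that opts a worker group into the new Ubuntu distro; the VPC and subnet IDs are placeholders, and only a few of the keys documented in `workers_group_defaults` are overridden:

```hcl
# Illustrative only: placeholder IDs, and just a subset of the
# workers_group_defaults keys shown in the table above.
module "eks" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = "test-eks-cluster"
  subnets      = ["subnet-abcde012", "subnet-bcde012a"]
  vpc_id       = "vpc-abcde012"

  tags = {
    "Environment" = "test"
  }

  worker_group_count = 1

  worker_groups = [
    {
      name          = "ubuntu-workers"
      distro        = "ubuntu"    # picks the Ubuntu EKS AMI instead of the Amazon Linux default
      instance_type = "m4.large"
      asg_max_size  = "3"
    },
  ]
}
```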
@@ -0,0 +1,5 @@
+#cloud-config
+# Allow user supplied pre userdata
+${pre_userdata}
+# Allow user supplied userdata
+${additional_userdata}
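The `${pre_userdata}` and `${additional_userdata}` placeholders in this template come from the matching worker group keys (see `workers_group_defaults` above). A hedged sketch of the caller side; the snippet values are arbitrary examples, not something this PR prescribes:

```hcl
# Passed as the module's worker_groups input; the two userdata values are
# rendered into the #cloud-config template shown above.
worker_groups = [
  {
    name                = "default"
    pre_userdata        = "# your own cloud-init directives, rendered first"
    additional_userdata = "packages: [htop]"
  },
]
```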
@@ -1,18 +1,23 @@
 resource "aws_autoscaling_group" "workers" {
   name_prefix = "${aws_eks_cluster.this.name}-${lookup(var.worker_groups[count.index], "name", count.index)}"
-  desired_capacity = "${lookup(var.worker_groups[count.index], "asg_desired_capacity", lookup(var.workers_group_defaults, "asg_desired_capacity"))}"
-  max_size = "${lookup(var.worker_groups[count.index], "asg_max_size",lookup(var.workers_group_defaults, "asg_max_size"))}"
-  min_size = "${lookup(var.worker_groups[count.index], "asg_min_size",lookup(var.workers_group_defaults, "asg_min_size"))}"
+  desired_capacity = "${lookup(var.worker_groups[count.index], "asg_desired_capacity", var.workers_group_defaults["asg_desired_capacity"])}"
+  max_size = "${lookup(var.worker_groups[count.index], "asg_max_size", var.workers_group_defaults["asg_max_size"])}"
+  min_size = "${lookup(var.worker_groups[count.index], "asg_min_size", var.workers_group_defaults["asg_min_size"])}"
   launch_configuration = "${element(aws_launch_configuration.workers.*.id, count.index)}"
   vpc_zone_identifier = ["${split(",", coalesce(lookup(var.worker_groups[count.index], "subnets", ""), join(",", var.subnets)))}"]
   count = "${var.worker_group_count}"
 
   tags = ["${concat(
     list(
-      map("key", "Name", "value", "${aws_eks_cluster.this.name}-${lookup(var.worker_groups[count.index], "name", count.index)}-eks_asg", "propagate_at_launch", true),
-      map("key", "kubernetes.io/cluster/${aws_eks_cluster.this.name}", "value", "owned", "propagate_at_launch", true),
+      map("key", "Name", "value", "${aws_eks_cluster.this.name}-${lookup(var.worker_groups[count.index], "name", count.index)}-eks_asg", "propagate_at_launch", "true"),
+      map("key", "kubernetes.io/cluster/${aws_eks_cluster.this.name}", "value", "owned", "propagate_at_launch", "true"),
     ),
-    local.asg_tags)
+    local.asg_tags,
+    list(
+      map("key", "${lookup(var.worker_groups[count.index], "distro", var.workers_group_defaults["distro"]) == "ubuntu" ? "com.ubuntu.cloud:eks:kubelet:" : ""}max-pods-per-node",
+        "value", "${local.max_pod_per_node[lookup(var.worker_groups[count.index], "instance_type", var.workers_group_defaults["instance_type"])]}",
+        "propagate_at_launch", "true")
+    ))
   }"]
 
   lifecycle {
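The `local.max_pod_per_node` map used in the new ASG tag is defined outside this hunk. Presumably it maps instance types to the pod limit imposed by their ENI/IP capacity; a rough sketch with a few published EKS limits (treat the exact values and key set as illustrative):

```hcl
# Sketch only: local.max_pod_per_node is referenced above but not part of
# this diff. It presumably looks something like this, keyed by instance type.
locals {
  max_pod_per_node = {
    "t2.medium" = 17
    "m4.large"  = 20
    "m5.large"  = 29
  }
}
```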
@@ -22,25 +27,26 @@ resource "aws_autoscaling_group" "workers" {
 
 resource "aws_launch_configuration" "workers" {
   name_prefix = "${aws_eks_cluster.this.name}-${lookup(var.worker_groups[count.index], "name", count.index)}"
-  associate_public_ip_address = "${lookup(var.worker_groups[count.index], "public_ip", lookup(var.workers_group_defaults, "public_ip"))}"
+  associate_public_ip_address = "${lookup(var.worker_groups[count.index], "public_ip", var.workers_group_defaults["public_ip"])}"
   security_groups = ["${local.worker_security_group_id}"]
   iam_instance_profile = "${aws_iam_instance_profile.workers.id}"
-  image_id = "${lookup(var.worker_groups[count.index], "ami_id", data.aws_ami.eks_worker.id)}"
-  instance_type = "${lookup(var.worker_groups[count.index], "instance_type", lookup(var.workers_group_defaults, "instance_type"))}"
-  key_name = "${lookup(var.worker_groups[count.index], "key_name", lookup(var.workers_group_defaults, "key_name"))}"
-  user_data_base64 = "${base64encode(element(data.template_file.userdata.*.rendered, count.index))}"
-  ebs_optimized = "${lookup(var.worker_groups[count.index], "ebs_optimized", lookup(local.ebs_optimized, lookup(var.worker_groups[count.index], "instance_type", lookup(var.workers_group_defaults, "instance_type")), false))}"
-  spot_price = "${lookup(var.worker_groups[count.index], "spot_price", lookup(var.workers_group_defaults, "spot_price"))}"
-  count = "${var.worker_group_count}"
+  image_id = "${lookup(var.worker_groups[count.index], "ami_id", lookup(local.distros[lookup(var.worker_groups[count.index], "distro", var.workers_group_defaults["distro"])], "ami_id"))}"
+
+  instance_type = "${lookup(var.worker_groups[count.index], "instance_type", var.workers_group_defaults["instance_type"])}"
+  key_name = "${lookup(var.worker_groups[count.index], "key_name", var.workers_group_defaults["key_name"])}"
+  user_data_base64 = "${base64encode(element(data.template_file.userdata.*.rendered, count.index))}"
+  ebs_optimized = "${lookup(var.worker_groups[count.index], "ebs_optimized", lookup(local.ebs_optimized, lookup(var.worker_groups[count.index], "instance_type", var.workers_group_defaults["instance_type"]), false))}"
+  spot_price = "${lookup(var.worker_groups[count.index], "spot_price", var.workers_group_defaults["spot_price"])}"
+  count = "${var.worker_group_count}"
 
   lifecycle {
     create_before_destroy = true
   }
 
   root_block_device {
-    volume_size = "${lookup(var.worker_groups[count.index], "root_volume_size", lookup(var.workers_group_defaults, "root_volume_size"))}"
-    volume_type = "${lookup(var.worker_groups[count.index], "root_volume_type", lookup(var.workers_group_defaults, "root_volume_type"))}"
-    iops = "${lookup(var.worker_groups[count.index], "root_iops", lookup(var.workers_group_defaults, "root_iops"))}"
+    volume_size = "${lookup(var.worker_groups[count.index], "root_volume_size", var.workers_group_defaults["root_volume_size"])}"
+    volume_type = "${lookup(var.worker_groups[count.index], "root_volume_type", var.workers_group_defaults["root_volume_type"])}"
+    iops = "${lookup(var.worker_groups[count.index], "root_iops", var.workers_group_defaults["root_iops"])}"
     delete_on_termination = true
   }
 }
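The new `image_id` expression prefers an explicit `ami_id` set on the worker group and only falls back to the AMI registered for the group's `distro` in `local.distros` (defined elsewhere in the module). A caller-side sketch of both paths; the AMI ID is a placeholder:

```hcl
# Illustrative worker_groups input: the first group pins an explicit AMI and
# bypasses the distro lookup, the second relies on distro = "ubuntu".
worker_group_count = 2

worker_groups = [
  {
    name   = "pinned-ami"
    ami_id = "ami-0123456789abcdef0"    # placeholder
  },
  {
    name   = "ubuntu"
    distro = "ubuntu"
  },
]
```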
@@ -115,9 +121,9 @@ resource "aws_iam_role_policy_attachment" "workers_AmazonEC2ContainerRegistryRea
 resource "null_resource" "tags_as_list_of_maps" {
   count = "${length(keys(var.tags))}"
 
-  triggers = "${map(
-    "key", "${element(keys(var.tags), count.index)}",
-    "value", "${element(values(var.tags), count.index)}",
-    "propagate_at_launch", "true"
-  )}"
+  triggers = {
+    key = "${element(keys(var.tags), count.index)}"
+    value = "${element(values(var.tags), count.index)}"
+    propagate_at_launch = "true"
+  }

Review comment: this looks nicer. Thanks! 👏

 }
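For context on where these triggers end up: `local.asg_tags`, which the autoscaling group concatenates into its `tags` above, is presumably built from this resource's triggers. The locals wiring is not part of this diff, so the sketch below is an assumption:

```hcl
# Assumed wiring (not shown in this diff): collect each per-tag trigger map
# into the list that aws_autoscaling_group.workers concatenates.
locals {
  asg_tags = ["${null_resource.tags_as_list_of_maps.*.triggers}"]
}

# With tags = { "Environment" = "test" }, this yields a single element:
#   { key = "Environment", value = "test", propagate_at_launch = "true" }
```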

Why would we switch to this random forked repo for `terraform-docs`?

At least this fork doesn't have the "This project currently has no active maintainers" label. I wanted features included in this PR: terraform-docs/terraform-docs#53. However, if you don't like it, I won't force this change.

Ahh, I see!

Hi, I am the author of the aforementioned PR. Unfortunately, folks at segmentio have not been responsive to my suggestion of having me join as a maintainer 🤞

This needs to stay: `| cat -s | tail -r | tail -n +2 | tail -r`. md formatting guidelines, just like terraform's formatting standards, shouldn't be considered optional and this one-liner helps us maintain that.

hmm, it doesn't look like I have `tac` on my Mac. I'd be okay with either a conditional or just explicitly stating:

I am happy to inform you that `segmentio/terraform-docs` is again under active maintenance. The features in `getcloudnative/terraform-docs`, including the one that generates default values for aggregate types (which has been emphasized in the discussion further down below at #95 (comment)), have been merged into `master`. The option you want to look out for is `--with-aggregate-type-defaults`.

Great, thanks for the update @metmajer! I see that 0.4 was just released but this isn't available via brew. Is that correct?
That's correct, I'm still working on this. The PR to brew is planned to go out this week.

@max-rocket-internet it took a bit longer than initially planned, but we now have a `HomebrewFormula/terraform-docs.rb` as part of the project, which will be updated with each release and contributed to the upstream Homebrew/homebrew-core repository. As of right now, we have the latest v0.4.5 available as binary releases and via `brew`.