Split Terraform files.
Import keypair
nicusX committed Aug 25, 2016
1 parent 2971aae commit 0849028
Showing 11 changed files with 406 additions and 383 deletions.
62 changes: 33 additions & 29 deletions README.md
@@ -1,7 +1,7 @@
# Kubernetes not the hardest way (or "Provisioning a Kubernetes Cluster on AWS using Terraform and Ansible")

The goal of this sample project is provisioning AWS infrastructure and a Kubernetes cluster, using Terraform and Ansible.
It is not meant to be production-ready, but to provide a realistic example, beyond the usual "Hello, world" ones.

Please refer to the companion blog posts: https://opencredo.com/kubernetes-aws-terraform-ansible-1/

@@ -46,7 +46,7 @@ Requirements on control machine:

### Linux installation

The same as OSX, except you will use the package manager of the distribution you are using.
Remember Ansible requires Python 2.5+ and is not compatible with Python 3.

### Windows installation
@@ -56,18 +56,16 @@ Seriously? ;)

## Credentials

### AWS KeyPair

You need a valid AWS Identity (`.pem`) file and Public Key. Terraform will import the [KeyPair](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) and Ansible will use the Identity to SSH into the machines.

Please read the [AWS Documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#how-to-generate-your-own-key-and-import-it-to-aws) about supported formats.
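As a minimal sketch (the resource name here is illustrative; the actual definition lives in this commit's Terraform files), the KeyPair import might look like:

```
# Sketch: import the Public Key as an AWS KeyPair,
# using the variables described in the next sections.
resource "aws_key_pair" "default_keypair" {
  key_name   = "${var.default_keypair_name}"
  public_key = "${var.default_keypair_public_key}"
}
```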

### Terraform and Ansible authentication

Both Terraform and Ansible expect AWS credentials in environment variables:
```
$ export AWS_ACCESS_KEY_ID=<access-key-id>
$ export AWS_SECRET_ACCESS_KEY="<secret-key>"
```

@@ -80,29 +78,36 @@ $ ssh-add <keypair-name>.pem

## Setup variables defining the environment

Before running Terraform, you must set some Terraform variables:

- `control_cidr`: The CIDR of your IP. All instances will accept traffic only from this address. Note this is a CIDR, not a single IP, e.g. `123.45.67.89/32` (mandatory)
- `default_keypair_public_key`: Valid public key corresponding to the Identity you will use to SSH into VMs, e.g. `"ssh-rsa AAA....xyz"` (mandatory)


You may also optionally redefine these variables:

- `default_keypair_name`: AWS key-pair name for all instances. (Default: "k8s-not-the-hardest-way")
- `vpc_name`: VPC Name. Must be unique in the AWS Account (Default: "kubernetes")
- `elb_name`: ELB Name for Kubernetes API. Can only contain characters valid for DNS names. Must be unique in the AWS Account (Default: "kubernetes")
- `owner`: `Owner` tag added to all AWS resources. No functional use. It is useful if you are sharing the same AWS account with others, to filter your resources on the AWS console. (Default: "kubernetes")


The easiest way to do this is to create a `terraform.tfvars` [variable file](https://www.terraform.io/docs/configuration/variables.html#variable-files) in the `./terraform` directory. Terraform automatically includes this file.

Example of `terraform.tfvars` variable file:
```
default_keypair_name = "lorenzo-oc"
# Mandatory
default_keypair_public_key = "ssh-rsa AAA...zzz"
control_cidr = "123.45.67.89/32"
vpc_name = "Lorenzo Kubernetes"
elb_name = "lorenzo-kubernetes"
# Optional
default_keypair_name = "lorenzo-glf"
vpc_name = "Lorenzo ETCD"
elb_name = "lorenzo-etcd"
owner = "Lorenzo"
```
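Alternatively, any Terraform variable may be passed as a `TF_VAR_<var-name>` environment variable (standard Terraform behaviour; values here are illustrative), for example:

```
$ export TF_VAR_control_cidr="123.45.67.89/32"
$ export TF_VAR_default_keypair_public_key="ssh-rsa AAA...zzz"
```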


### Changing AWS Region

By default, this project uses the "eu-west-1" AWS Region.
@@ -111,18 +116,18 @@ To use a different Region, you have to change two additional Terraform variables:

- `region`: AWS Region (default: "eu-west-1"). Also see "Changing AWS Region", below.
- `zone`: AWS Availability Zone, in the selected Region (default: "eu-west-1a")
- `default_ami`: Choose an AMI with Ubuntu 16.04 LTS HVM, EBS-SSD, available in the new Region

You also have to **manually** modify `./ansible/hosts/ec2.ini`, changing `regions = eu-west-1` to the Region you are using; see the example below.
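For example, if you were using `us-east-1` (a hypothetical choice), the relevant line of `./ansible/hosts/ec2.ini` would become:

```
regions = us-east-1
```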

## Provision infrastructure with Terraform

Run Terraform commands from the `./terraform` subdirectory.

```
$ terraform plan
$ terraform apply
```


Terraform outputs the public DNS name to access the Kubernetes API and the Workers' public IPs.
@@ -142,16 +147,17 @@ Take note of both DNS name and workers IP addresses. You will need them later (t
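The API DNS name may also be retrieved at any moment from the `kubernetes_api_dns_name` Terraform output defined in `k8s_controllers.tf` (see below):

```
$ terraform output kubernetes_api_dns_name
```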

Terraform also generates an `ssh.cfg` file locally, containing the aliases for accessing all VMs by name (`controller0..2`, `etcd0..2`, `worker0..2`).

This configuration file is useful for SSHing directly into the machines. It is NOT used by Ansible.

e.g. to access instance `worker0`:
```
$ ssh -F ssh.cfg worker0
```
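The file uses standard `ssh_config` syntax; a hypothetical entry (actual host names, IPs, and user depend on your run) looks something like:

```
Host worker0
  HostName <worker0-public-ip>
  User ubuntu
  IdentityFile <keypair-name>.pem
```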


## Install Kubernetes with Ansible

Run Ansible commands from the `./ansible` subdirectory.

### Install and set up Kubernetes cluster

@@ -163,9 +169,7 @@ $ ansible-playbook infra.yaml
### Setup Kubernetes CLI

This step sets up the Kubernetes CLI (`kubectl`) configuration on the control machine.
The configuration includes the DNS name of the Kubernetes API endpoint, as returned by Terraform.

```
$ ansible-playbook kubectl.yaml --extra-vars "kubernetes_api_endpoint=<kubernetes-api-dns-name>"
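# (Hypothetical check, not part of the original instructions: once kubectl
# is configured, the cluster should answer through the Kubernetes API ELB)
$ kubectl get componentstatuses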
```

@@ -254,7 +258,7 @@ There are some known simplifications, compared to a production-ready solution:

- Networking setup is very simple: ALL instances have a public IP (though only accessible from a configurable Control IP).
- Infrastructure managed by direct SSH into instances (no VPN, no Bastion).
- Very basic Service Account and Secret (to change them, modify: `./ansible/roles/controller/files/token.csv` and `./ansible/roles/worker/templates/kubeconfig.j2`)
- No Load Balancer for the exposed NodePorts.
- No fixed internal or external DNS naming (only dynamic names generated by AWS)
- No support for Kubernetes logging
2 changes: 1 addition & 1 deletion terraform/.gitignore
@@ -1,2 +1,2 @@
/terraform.tfvars
/util/
6 changes: 6 additions & 0 deletions terraform/aws.tf
@@ -0,0 +1,6 @@
# Retrieve AWS credentials from env variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
provider "aws" {
access_key = ""
secret_key = ""
region = "${var.region}"
}
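As the comment above says, the empty `access_key` and `secret_key` make the provider fall back to the standard AWS environment variables, so the credentials exported in the README steps are picked up here; a usage sketch:

```
$ export AWS_ACCESS_KEY_ID=<access-key-id>
$ export AWS_SECRET_ACCESS_KEY="<secret-key>"
$ terraform plan
```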
45 changes: 45 additions & 0 deletions terraform/certificates.tf
@@ -0,0 +1,45 @@
#########################
## Generate certificates
#########################

# Generate Certificates
data "template_file" "certificates" {
template = "${file("${path.module}/template/kubernetes-csr.json")}"
depends_on = ["aws_elb.kubernetes_api","aws_instance.etcd","aws_instance.controller","aws_instance.worker"]
vars {
kubernetes_api_elb_dns_name = "${aws_elb.kubernetes_api.dns_name}"
kubernetes_cluster_dns = "${var.kubernetes_cluster_dns}"

# Unfortunately, variables must be primitives, neither lists nor maps
etcd0_ip = "${aws_instance.etcd.0.private_ip}"
etcd1_ip = "${aws_instance.etcd.1.private_ip}"
etcd2_ip = "${aws_instance.etcd.2.private_ip}"
controller0_ip = "${aws_instance.controller.0.private_ip}"
controller1_ip = "${aws_instance.controller.1.private_ip}"
controller2_ip = "${aws_instance.controller.2.private_ip}"
worker0_ip = "${aws_instance.worker.0.private_ip}"
worker1_ip = "${aws_instance.worker.1.private_ip}"
worker2_ip = "${aws_instance.worker.2.private_ip}"

etcd0_dns = "${aws_instance.etcd.0.private_dns}"
etcd1_dns = "${aws_instance.etcd.1.private_dns}"
etcd2_dns = "${aws_instance.etcd.2.private_dns}"
controller0_dns = "${aws_instance.controller.0.private_dns}"
controller1_dns = "${aws_instance.controller.1.private_dns}"
controller2_dns = "${aws_instance.controller.2.private_dns}"
worker0_dns = "${aws_instance.worker.0.private_dns}"
worker1_dns = "${aws_instance.worker.1.private_dns}"
worker2_dns = "${aws_instance.worker.2.private_dns}"
}
}
resource "null_resource" "certificates" {
triggers {
template_rendered = "${ data.template_file.certificates.rendered }"
}
provisioner "local-exec" {
command = "echo '${ data.template_file.certificates.rendered }' > ../cert/kubernetes-csr.json"
}
provisioner "local-exec" {
command = "cd ../cert; cfssl gencert -initca ca-csr.json | cfssljson -bare ca; cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes"
}
}
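Once applied, the generated server certificate can be inspected locally, assuming `openssl` is available (the path matches the `local-exec` above):

```
$ openssl x509 -in ../cert/kubernetes.pem -text -noout
```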
25 changes: 25 additions & 0 deletions terraform/etcf.tf
@@ -0,0 +1,25 @@
#########################
# etcd cluster instances
#########################

resource "aws_instance" "etcd" {
count = 3
ami = "${var.default_ami}"
instance_type = "${var.default_instance_type}"

subnet_id = "${aws_subnet.kubernetes.id}"
private_ip = "${cidrhost(var.vpc_cidr, 10 + count.index)}"
associate_public_ip_address = true # Instances have public, dynamic IP

availability_zone = "${var.zone}"
vpc_security_group_ids = ["${aws_security_group.kubernetes.id}"]
key_name = "${var.default_keypair_name}"

tags {
Owner = "${var.owner}"
Name = "etcd-${count.index}"
ansibleFilter = "${var.ansibleFilter}"
ansibleNodeType = "etcd"
ansibleNodeName = "etcd${count.index}"
}
}
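`cidrhost` returns the n-th host address in the given CIDR, so each etcd node gets a predictable private IP; a worked example with a hypothetical `vpc_cidr`:

```
# Assuming vpc_cidr = "10.43.0.0/16" (illustrative value):
#   cidrhost("10.43.0.0/16", 10 + 0) = "10.43.0.10"   # etcd-0
#   cidrhost("10.43.0.0/16", 10 + 1) = "10.43.0.11"   # etcd-1
#   cidrhost("10.43.0.0/16", 10 + 2) = "10.43.0.12"   # etcd-2
```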
99 changes: 99 additions & 0 deletions terraform/k8s_controllers.tf
@@ -0,0 +1,99 @@

############################
# K8s Control Plane instances
############################

resource "aws_instance" "controller" {

count = 3
ami = "${var.default_ami}"
instance_type = "${var.default_instance_type}"

subnet_id = "${aws_subnet.kubernetes.id}"
private_ip = "${cidrhost(var.vpc_cidr, 20 + count.index)}"
associate_public_ip_address = true # Instances have public, dynamic IP
source_dest_check = false # TODO Required??

availability_zone = "${var.zone}"
vpc_security_group_ids = ["${aws_security_group.kubernetes.id}"]
key_name = "${var.default_keypair_name}"

tags {
Owner = "${var.owner}"
Name = "controller-${count.index}"
ansibleFilter = "${var.ansibleFilter}"
ansibleNodeType = "controller"
ansibleNodeName = "controller${count.index}"
}
}

###############################
## Kubernetes API Load Balancer
###############################

resource "aws_elb" "kubernetes_api" {
name = "${var.elb_name}"
instances = ["${aws_instance.controller.*.id}"]
subnets = ["${aws_subnet.kubernetes.id}"]
cross_zone_load_balancing = false

security_groups = ["${aws_security_group.kubernetes_api.id}"]

listener {
lb_port = 6443
instance_port = 6443
lb_protocol = "TCP"
instance_protocol = "TCP"
}

health_check {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 15
target = "HTTP:8080/healthz"
interval = 30
}

tags {
Name = "kubernetes"
Owner = "${var.owner}"
}
}

############
## Security
############

resource "aws_security_group" "kubernetes_api" {
vpc_id = "${aws_vpc.kubernetes.id}"
name = "kubernetes-api"

# Allow inbound traffic to the port used by Kubernetes API HTTPS
ingress {
from_port = 6443
to_port = 6443
protocol = "TCP"
cidr_blocks = ["${var.control_cidr}"]
}

# Allow all outbound traffic
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

tags {
Owner = "${var.owner}"
Name = "kubernetes-api"
}
}

############
## Outputs
############

output "kubernetes_api_dns_name" {
value = "${aws_elb.kubernetes_api.dns_name}"
}
