Commit 582c72a

Initial commit
1 parent 61a44a8 commit 582c72a

File tree

11 files changed: +445 −2 lines


README.md

Lines changed: 76 additions & 2 deletions
@@ -1,2 +1,76 @@
-# terraform-gke
-A terraform repo to provision a GKE cluster

# k8s
Here, we will go through the steps of configuring our k8s environment, playing around with the necessary commands, and then distributing some manifests to our cluster.

## Steps
- install local dependencies
- cloud setup
- enable k8s engine
- create service account
- git and configure
- test cluster connectivity

### install local dependencies
You need [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) ("cube-control") to manage and operate your k8s cluster from a local machine.
- Mac
```bash
brew install kubernetes-cli
```

You will need [helm](https://github.com/helm/helm) too.
- Mac
```bash
brew install kubernetes-helm
```

You should have gcloud on your system as well, so you can manage GCP from the local CLI.
- [Mac](https://cloud.google.com/sdk/docs/quickstart-macos)
```bash
brew cask install google-cloud-sdk
```
- [Linux](https://cloud.google.com/sdk/docs/quickstart-linux)/[Ubuntu](https://cloud.google.com/sdk/docs/quickstart-debian-ubuntu)
- [Windows](https://cloud.google.com/sdk/docs/quickstart-windows)
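Once the tools above are installed, a quick sanity check confirms they are all on your PATH (a minimal sketch; terraform is included here because it is needed in the "git and configure" step):

```bash
# Verify each CLI tool used in this walkthrough is installed and on the PATH.
for tool in kubectl helm gcloud terraform; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: MISSING"
  fi
done
```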
### enable k8s engine
- Navigate to K8s Engine in the GCP Console
- Enable the K8s API

### create service account
- Navigate to the IAM section
- Navigate to [Service Accounts](https://console.cloud.google.com/projectselector/iam-admin/serviceaccounts?supportedpurview=project&project=&folder=&organizationId=)
- Create a new service account
- Give it admin credentials
- Generate a .json key file
- Store the .json key file locally somewhere you can access later
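The console steps above can also be scripted. A hedged sketch with gcloud: the project ID, account name, role, and key path below are all placeholder assumptions, and the console flow above remains the source of truth for which role to grant.

```bash
# Hypothetical CLI equivalent of the console steps; defined only, not invoked.
create_service_account() {
  local project="$1" sa="$2" keyfile="$3"
  # Create the service account in the given project.
  gcloud iam service-accounts create "$sa" --project "$project"
  # Grant it admin rights on clusters (role choice is an assumption).
  gcloud projects add-iam-policy-binding "$project" \
    --member "serviceAccount:${sa}@${project}.iam.gserviceaccount.com" \
    --role "roles/container.admin"
  # Generate the .json key file and store it locally.
  gcloud iam service-accounts keys create "$keyfile" \
    --iam-account "${sa}@${project}.iam.gserviceaccount.com"
}
# Usage: create_service_account my-gcp-project k8s-vault ~/keys/k8s-vault.json
```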
### git and configure
- clone the [repo](https://github.com/kawsark/terraform-gke.git)
- if you don't have terraform, please [download it](https://www.terraform.io/downloads.html) or [build it from source](https://github.com/hashicorp/terraform)
- change into the k8s dir
```bash
cd k8s
```
- read the comments and make the necessary changes
- initialize
```bash
terraform init
```
- plan
```bash
terraform plan
```
- apply
```bash
terraform apply
```
- if you need references:
  - [k8s cluster on gcp](https://www.terraform.io/docs/providers/google/r/container_cluster.html)
  - [google provider](https://www.terraform.io/docs/providers/google/index.html)
### test cluster connectivity
Assuming all went well, we should each have a k8s cluster on GCP and be able to navigate to it from the K8s Engine tab in the GCP Console. Click the connect button and copy/paste the gcloud command into your local terminal. Now run
```bash
kubectl get nodes
```
... I bet you see your cluster. If you don't, just let me know.

If all is well, then we can move on to the k8sWithIstio directory and dig into the tool.
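The connect button's command can be sketched as a small helper (it is the same call gke.sh in this repo uses; the cluster, zone, and project values are placeholders):

```bash
# Fetch kubeconfig credentials for the cluster, then list its nodes.
# Defined only, not invoked here; arguments are placeholders.
connect_cluster() {
  gcloud container clusters get-credentials "$1" --zone "$2" --project "$3"
  kubectl get nodes
}
# Usage: connect_cluster my-cluster us-east4-b my-gcp-project
```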

backend.tf.example

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "your-tfc-org"
    workspaces {
      name = "terraform-gke-k8s-hackathon"
    }
  }
}

gke.sh

Lines changed: 77 additions & 0 deletions
@@ -0,0 +1,77 @@
#!/bin/bash

if [ -z "$TFH_token" ] || [ -z "$TFH_org" ] || [ -z "$GOOGLE_CREDENTIALS_PATH" ] || [ -z "$GOOGLE_PROJECT" ];
then
  echo "You must set TFH_token, GOOGLE_CREDENTIALS_PATH, GOOGLE_PROJECT and TFH_org"
  exit 1
fi

echo "Listing available clusters"
gcloud container clusters list

echo "Enter a cluster name for your GKE cluster. A new name if creating a new cluster, or an existing name if destroying."
read cluster_name
echo "Using cluster name: $cluster_name"
export TFH_name="terraform-gke-k8s-$cluster_name"

echo 'Enter "apply" or "destroy" for this cluster (without quotes)'
read operation
echo "Going to perform terraform $operation on workspace $TFH_name"

echo "Enter a GCP region. E.g. us-east4"
read region

echo "Enter a zone in $region. E.g. us-east4-b"
read zone

export machine_type="n1-standard-2"
export node_count=3
echo "Defaulting to machine_type: $machine_type and node_count: $node_count"

cat <<EOF >./backend.tf
terraform {
  backend "remote" {
    hostname = "app.terraform.io"
    organization = "${TFH_org}"
    workspaces {
      name = "${TFH_name}"
    }
  }
}
EOF

terraform init
workspace_id=$(curl -s --header "Authorization: Bearer ${TFH_token}" --header "Content-Type: application/vnd.api+json" "https://app.terraform.io/api/v2/organizations/${TFH_org}/workspaces/${TFH_name}" | jq -r .data.id)

tfh pushvars -var "masterAuthPass=solstice-vault-021219" -var "masterAuthUser=solstice-k8s" -var "serviceAccount=k8s-vault" -var "project=${GOOGLE_PROJECT}" -var "region=$region" -var "zone=$zone" -var "cluster_name=${cluster_name}" -var "node_count=${node_count}" -var "machine_type=${machine_type}" -env-var "CONFIRM_DESTROY=1" -overwrite-all -dry-run false

echo "Setting new GOOGLE_CREDENTIALS from $GOOGLE_CREDENTIALS_PATH"
export GOOGLE_CREDENTIALS=$(tr '\n' ' ' < "$GOOGLE_CREDENTIALS_PATH" | sed -e 's/\"/\\\\"/g' -e 's/\//\\\//g' -e 's/\\n/\\\\\\\\n/g')
sed -e "s/my-key/GOOGLE_CREDENTIALS/" -e "s/my-hcl/false/" -e "s/my-value/${GOOGLE_CREDENTIALS}/" -e "s/my-category/env/" -e "s/my-sensitive/true/" -e "s/my-workspace-id/${workspace_id}/" < api_templates/variable.json.template > variable.json
curl --header "Authorization: Bearer ${TFH_token}" --header "Content-Type: application/vnd.api+json" --data @variable.json "https://app.terraform.io/api/v2/vars"
rm -f variable.json

terraform $operation

echo "Sleeping 10 seconds before proceeding"
sleep 10

if [ "$operation" == "apply" ]; then

  echo "Checking for existing context with this cluster name"
  context=$(kubectl config get-contexts | grep "$cluster_name" | awk '{print $2}')
  if [ ! -z "$context" ]; then
    echo "Deleting previous context: $context"
    kubectl config delete-context "$context"
  fi

  echo "Generating kubeconfig"
  gcloud container clusters get-credentials "$cluster_name" --zone "$zone" --project "$GOOGLE_PROJECT"

  context=$(kubectl config get-contexts | grep "$cluster_name" | awk '{print $2}')
  echo "Switching context to: $context"
  kubectl config use-context "$context"
  kubectl config current-context

  echo "Dumping cluster-info:"
  kubectl cluster-info
fi
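Before running gke.sh, the four environment variables it checks must be set. A sketch (every value here is a placeholder):

```bash
# Hypothetical environment for gke.sh; all values are placeholders.
export TFH_token="replace-with-your-tfc-api-token"
export TFH_org="my-tfc-org"
export GOOGLE_PROJECT="my-gcp-project"
export GOOGLE_CREDENTIALS_PATH="$HOME/keys/k8s-vault.json"
# The script then prompts for cluster name, apply/destroy, region and zone:
# ./gke.sh
```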

k8sPlay/README.md

Lines changed: 62 additions & 0 deletions
@@ -0,0 +1,62 @@
# K8s Play

## Steps
- deploy simple resources
- deploy redis cache node
- clean up

### deploy simple resources
- look at the counting service app deployment definition
```bash
cat yaml-minimal/counting-deployment.yaml
```
- apply the counting service via kubectl
```bash
kubectl apply -f yaml-minimal/counting-deployment.yaml
```
- check the log to verify all is well. **NOTE**: you should see `Serving at http://localhost:9001` in the stdout
```bash
kubectl get pods
```
```bash
kubectl logs <name of pod>
```
**NOTE**: spoiler alert; the pod's name is counting-minimal-pod
- it's local, so we can leverage the power of k8s to forward the port...
```bash
kubectl port-forward pod/counting-minimal-pod 9001:9001
```
- then [look at it](http://localhost:9001)
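With the port-forward running in another terminal, the service should also answer on the command line (a sketch; the counting-service is assumed to return a small JSON body with a running count):

```bash
# Hit the forwarded port; fall back to a message if the tunnel isn't up.
curl -s http://localhost:9001 || echo "port-forward not running"
```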
- now let's look at the node port
```bash
cat yaml-minimal/counting-node-port.yaml
```
- apply it with kubectl
```bash
kubectl apply -f yaml-minimal/counting-node-port.yaml
```

### deploy Redis cache
- look at the Redis cache definition
```bash
cat redis-cache/redis.yaml
```
- apply the Redis cache via kubectl
```bash
kubectl apply -f redis-cache/redis.yaml
```

- inspect the distributed pods
```bash
kubectl get pods -l app=redis -o wide
```
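A quick way to confirm the cache answers is to ping one of the pods (a hedged sketch; the pod name argument comes from the `kubectl get pods` output above):

```bash
# Ping a Redis pod through kubectl exec; a healthy pod replies PONG.
# Defined only, not invoked here.
redis_ping() {
  kubectl exec "$1" -- redis-cli ping
}
# Usage: redis_ping <redis-pod-name>
```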
Note that we have specified more cache pod replicas (4) than the default node count (3), and the podAntiAffinity clause ensures that each node runs one and only one Redis Pod. Therefore, one or more pods might be in Pending status.

### clean up
We don't need any of this anymore; it was just for fun... please be responsible and kill it!
```bash
kubectl delete -f yaml-minimal/counting-deployment.yaml && kubectl delete -f yaml-minimal/counting-node-port.yaml
```

It's time to go back to lecture...

k8sPlay/redis-cache/redis.yaml

Lines changed: 41 additions & 0 deletions
@@ -0,0 +1,41 @@
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  ports:
  - port: 6379
    name: redis
    targetPort: 6379
  selector:
    app: redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  replicas: 4
  template:
    metadata:
      labels:
        app: redis
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - redis
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: redis-server
        image: redis:3.2-alpine

k8sPlay/redis-cache/redis.yaml~

Lines changed: 41 additions & 0 deletions
@@ -0,0 +1,41 @@
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  ports:
  - port: 6379
    name: redis
    targetPort: 6379
  selector:
    app: redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  replicas: 3
  template:
    metadata:
      labels:
        app: redis
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - redis
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: redis-server
        image: redis:3.2-alpine
k8sPlay/yaml-minimal/counting-deployment.yaml

Lines changed: 22 additions & 0 deletions
@@ -0,0 +1,22 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: counting-service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: counting-service-app
  template:
    metadata:
      name: counting-service-app
      labels:
        app: counting-service-app
        version: v1
    spec:
      containers:
      - name: app
        image: hashicorp/counting-service:0.0.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9001
k8sPlay/yaml-minimal/counting-node-port.yaml

Lines changed: 16 additions & 0 deletions
@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  name: counting-service
  labels:
    app: counting-service-app
    version: v1
spec:
  selector:
    app: counting-service-app
    version: v1
  ports:
  - name: http
    port: 80
    targetPort: 9001
  type: NodePort
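Once this NodePort service is applied, k8s assigns a port from the node-port range. A small helper (hypothetical, defined only) retrieves it:

```bash
# Print the node port k8s assigned to the counting-service; not invoked here.
counting_node_port() {
  kubectl get svc counting-service -o jsonpath='{.spec.ports[0].nodePort}'
}
# Usage: counting_node_port   # then browse http://<node-ip>:<that-port>
```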
