PR for CNI custom network chapter (aws-samples#292)
* adding monitoring using prometheus & grafana chapter

* changing draft: false

* minor change to grafana dashboards

* adding challenge question & offering solution

* formatting changes - removed  for re-emphasizing

* minor change with relative links

* fixing typo in health checks chapter

* minor improvements in health check chapter

* minor changes to healthcheck chapter

* added challenge question to network policy

* minor modification to add shortcut to ssh key link

* reordering pages based on RI agenda

* minor modification to /content/prerequisities/aws_event/portal.md

* reordering pages based on workshop feedback & merging helm install into single doc

* fixing bugs with calico cleanup and directional traffic and simplified instructions with sshkey

* fixing dashboard error

* adding custom networking tutorial under advanced networking section

* minor additions to custom networking

* minor addition to cleanup

* minor improvements to custom networking based on feedback

* minor improvements to custom networking based on feedback
dalbhanj authored and brentley committed Mar 7, 2019
1 parent 4627fe1 commit 1854330
Showing 7 changed files with 380 additions and 0 deletions.
9 changes: 9 additions & 0 deletions content/advanced-networking/_index.md
@@ -0,0 +1,9 @@
---
title: "Advanced VPC Networking with EKS"
chapter: true
weight: 55
---

# Advanced VPC Networking with EKS

In this chapter, we will review some of the advanced VPC networking features available with EKS.
11 changes: 11 additions & 0 deletions content/advanced-networking/secondary_cidr/_index.md
@@ -0,0 +1,11 @@
---
title: "Using Secondary CIDRs with EKS"
chapter: true
weight: 10
---

# Using Secondary CIDRs with EKS

You can expand your VPC network by adding additional CIDR ranges. This capability is useful if you are running out of IP addresses within your existing VPC or if you have consumed all available RFC 1918 CIDR ranges within your corporate network. EKS supports additional IPv4 CIDR blocks in the 100.64.0.0/10 and 198.19.0.0/16 ranges. You can review the announcement on our [what's new blog](https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-eks-now-supports-additional-vpc-cidr-blocks/).

In this tutorial, we will walk you through the configuration needed to run your Pod networking on top of secondary CIDRs.
55 changes: 55 additions & 0 deletions content/advanced-networking/secondary_cidr/cleanup.md
@@ -0,0 +1,55 @@
---
title: "Cleanup"
date: 2019-03-02T16:47:38-05:00
weight: 60
---
Let's clean up the resources created in this tutorial:

```
kubectl delete deployments --all
```
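The Test Networking chapter also exposed an nginx service. If you created it, delete it too (this assumes the service name nginx from that chapter):
```
kubectl delete service nginx
```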
Edit the aws-node DaemonSet and comment out AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG and its value:
```
kubectl edit daemonset -n kube-system aws-node
```
```
...
    spec:
      containers:
      - env:
        #- name: AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG
        #  value: "true"
        - name: AWS_VPC_K8S_CNI_LOGLEVEL
          value: DEBUG
        - name: MY_NODE_NAME
...
```
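If you prefer a non-interactive approach, here is a sketch that removes the variable with kubectl set env instead of commenting it out; removing it has the same effect, since custom networking is disabled by default:
```
kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG-
```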
Delete the ENIConfig custom resources:
```
kubectl delete eniconfig/group1-pod-netconfig
kubectl delete eniconfig/group2-pod-netconfig
kubectl delete eniconfig/group3-pod-netconfig
```
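You can verify that they are gone, using the plural resource name registered by the CRD:
```
kubectl get eniconfigs
```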
Terminate the EC2 worker nodes so that fresh instances are launched with the default CNI configuration.

{{% notice warning %}}
Use caution before you run the next command: it terminates all worker nodes in your workshop, along with any pods running on them.
{{% /notice %}}

```
INSTANCE_IDS=(`aws ec2 describe-instances --query 'Reservations[*].Instances[*].InstanceId' --filters "Name=tag:Name,Values=eksworkshop*" --output text` )
for i in "${INSTANCE_IDS[@]}"
do
echo "Terminating EC2 instance $i ..."
aws ec2 terminate-instances --instance-ids $i
done
```
Finally, delete the secondary CIDR subnets and disassociate the secondary CIDR block from your VPC.
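If you are running this cleanup in a new shell session, the CGNAT_SNET subnet variables from the Prerequisites chapter may no longer be set. Here is a sketch to re-derive them, assuming the three /19 subnets created earlier are the only 100.64.* subnets in the VPC:
```
# Re-derive the secondary CIDR subnet IDs by their CIDR blocks
VPC_ID=$(aws ec2 describe-vpcs --filters Name=tag:Name,Values=eksctl-eksworkshop* | jq -r '.Vpcs[].VpcId')
CGNAT_SNET1=$(aws ec2 describe-subnets --filters "Name=vpc-id,Values=$VPC_ID" "Name=cidr-block,Values=100.64.0.0/19" | jq -r '.Subnets[].SubnetId')
CGNAT_SNET2=$(aws ec2 describe-subnets --filters "Name=vpc-id,Values=$VPC_ID" "Name=cidr-block,Values=100.64.32.0/19" | jq -r '.Subnets[].SubnetId')
CGNAT_SNET3=$(aws ec2 describe-subnets --filters "Name=vpc-id,Values=$VPC_ID" "Name=cidr-block,Values=100.64.64.0/19" | jq -r '.Subnets[].SubnetId')
```
With the variables set, run the cleanup: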
```
VPC_ID=$(aws ec2 describe-vpcs --filters Name=tag:Name,Values=eksctl-eksworkshop* | jq -r '.Vpcs[].VpcId')
ASSOCIATION_ID=$(aws ec2 describe-vpcs --vpc-ids $VPC_ID | jq -r '.Vpcs[].CidrBlockAssociationSet[] | select(.CidrBlock == "100.64.0.0/16") | .AssociationId')
aws ec2 delete-subnet --subnet-id $CGNAT_SNET1
aws ec2 delete-subnet --subnet-id $CGNAT_SNET2
aws ec2 delete-subnet --subnet-id $CGNAT_SNET3
aws ec2 disassociate-vpc-cidr-block --association-id $ASSOCIATION_ID
```
61 changes: 61 additions & 0 deletions content/advanced-networking/secondary_cidr/configure-cni.md
@@ -0,0 +1,61 @@
---
title: "Configure CNI"
date: 2019-02-13T01:12:49-05:00
weight: 30
---

Before we start making changes to the VPC CNI, let's make sure we are using the latest CNI version.

Run this command to find the CNI version:

```
kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
```
Here is a sample response
```
amazon-k8s-cni:1.2.1
```
Upgrade to version 1.3 if you have an older version:
```
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.3/aws-k8s-cni.yaml
```
Wait until all the pods have been recycled. You can check the status of the pods using this command:
```
kubectl get pods -n kube-system -w
```
### Configure Custom networking

Edit the aws-node DaemonSet and add the AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG environment variable to the container spec, setting it to true.

Note: You only need to add two lines to the spec.
```
kubectl edit daemonset -n kube-system aws-node
```
```
...
    spec:
      containers:
      - env:
        - name: AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG
          value: "true"
        - name: AWS_VPC_K8S_CNI_LOGLEVEL
          value: DEBUG
        - name: MY_NODE_NAME
...
```
Save the file and exit your text editor.
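If you prefer a non-interactive approach, the same change can be made with kubectl set env; this is a sketch equivalent to the manual edit above:
```
kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
```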

Terminate the worker nodes so that Auto Scaling launches new nodes that come bootstrapped with the custom network config.

{{% notice warning %}}
Use caution before you run the next command: it terminates all worker nodes in your workshop, along with any pods running on them.
{{% /notice %}}

```
INSTANCE_IDS=(`aws ec2 describe-instances --query 'Reservations[*].Instances[*].InstanceId' --filters "Name=tag:Name,Values=eksworkshop*" --output text` )
for i in "${INSTANCE_IDS[@]}"
do
echo "Terminating EC2 instance $i ..."
aws ec2 terminate-instances --instance-ids $i
done
```
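Once the instances are terminated, Auto Scaling launches replacements. You can watch the new nodes register and reach the Ready state before moving on:
```
kubectl get nodes -w
```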
122 changes: 122 additions & 0 deletions content/advanced-networking/secondary_cidr/eniconfig_crd.md
@@ -0,0 +1,122 @@
---
title: "Create CRDs"
date: 2019-03-02T12:47:43-05:00
weight: 40
---

### Create custom resources for ENIConfig CRD
As a next step, we will create custom resources for the ENIConfig custom resource definition (CRD). CRDs are extensions of the Kubernetes API that store collections of API objects of a certain kind. In this case, we will store VPC subnet and SecurityGroup configuration in ENIConfig objects so that worker nodes can use them to configure the VPC CNI plugin.

You should already have the ENIConfig CRD installed with the latest CNI version (1.3+). You can check whether it is installed by running this command:
```
kubectl get crd
```
You should see a response similar to this:
```
NAME CREATED AT
eniconfigs.crd.k8s.amazonaws.com 2019-03-07T20:06:48Z
```
If you don't have the ENIConfig CRD installed, you can install it with this command:
```
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.3/aws-k8s-cni.yaml
```
Create custom resources for each subnet by replacing the **Subnet** and **SecurityGroup IDs**. Since we created three secondary subnets, we need to create three custom resources.

Here is the template for the custom resource. Note that the values for the Subnet ID and SecurityGroup IDs need to be replaced with the appropriate values:
```
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: group1-pod-netconfig
spec:
  subnet: $SUBNETID1
  securityGroups:
    - $SECURITYGROUPID1
    - $SECURITYGROUPID2
```
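If you prefer to generate these manifests from the shell, here is a sketch for the first group. It assumes you have exported SUBNETID1, SECURITYGROUPID1, and SECURITYGROUPID2 with the values you gather below; these variable names are placeholders from the template above, not variables the workshop sets for you:
```
# Sketch: render group1-pod-netconfig.yaml from placeholder environment variables
cat <<EOF > group1-pod-netconfig.yaml
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: group1-pod-netconfig
spec:
  subnet: $SUBNETID1
  securityGroups:
    - $SECURITYGROUPID1
    - $SECURITYGROUPID2
EOF
```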
Check the AZs and Subnet IDs for these subnets. Make note of the AZ info, as you will need it when you annotate the worker nodes with the custom network config:
```
aws ec2 describe-subnets --filters "Name=cidr-block,Values=100.64.*" --query 'Subnets[*].[CidrBlock,SubnetId,AvailabilityZone]' --output table
```
```
--------------------------------------------------------------
| DescribeSubnets |
+-----------------+----------------------------+-------------+
| 100.64.32.0/19 | subnet-07dab05836e4abe91 | us-east-2a |
| 100.64.64.0/19 | subnet-0692cd08cc4df9b6a | us-east-2c |
| 100.64.0.0/19 | subnet-04f960ffc8be6865c | us-east-2b |
+-----------------+----------------------------+-------------+
```
Check your worker node SecurityGroups:
```
INSTANCE_IDS=(`aws ec2 describe-instances --query 'Reservations[*].Instances[*].InstanceId' --filters "Name=tag:Name,Values=eksworkshop*" --output text`)
for i in "${INSTANCE_IDS[@]}"
do
echo "SecurityGroup for EC2 instance $i ..."
aws ec2 describe-instances --instance-ids $i | jq -r '.Reservations[].Instances[].SecurityGroups[].GroupId'
done
```
```
SecurityGroup for EC2 instance i-03ea1a083c924cd78 ...
sg-070d03008bda531ad
sg-06e5cab8e5d6f16ef
SecurityGroup for EC2 instance i-0a635aed890c7cc3e ...
sg-070d03008bda531ad
sg-06e5cab8e5d6f16ef
SecurityGroup for EC2 instance i-048e5ec8815e5ea8a ...
sg-070d03008bda531ad
sg-06e5cab8e5d6f16ef
```
Create the custom resource **group1-pod-netconfig.yaml** for the first subnet (100.64.0.0/19). Replace the SubnetId and SecurityGroupIds with the values from above. Here is how it looks with the configuration values for my environment.

Note: We are using the same SecurityGroups for the pods as for your worker nodes, but you can change these and use custom SecurityGroups for your pod networking.

```
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: group1-pod-netconfig
spec:
  subnet: subnet-04f960ffc8be6865c
  securityGroups:
    - sg-070d03008bda531ad
    - sg-06e5cab8e5d6f16ef
```
Create the custom resource **group2-pod-netconfig.yaml** for the second subnet (100.64.32.0/19). Replace the SubnetId and SecurityGroupIds as above.

Similarly, create the custom resource **group3-pod-netconfig.yaml** for the third subnet (100.64.64.0/19). Replace the SubnetId and SecurityGroupIds as above.

Check the instance details using this command, as you will need the AZ info when you annotate the worker nodes with the custom network config:
```
aws ec2 describe-instances --filters "Name=tag:Name,Values=eksworkshop*" --query 'Reservations[*].Instances[*].[PrivateDnsName,Tags[?Key==`Name`].Value|[0],Placement.AvailabilityZone,PrivateIpAddress,PublicIpAddress]' --output table
```
```
------------------------------------------------------------------------------------------------------------------------------------------
| DescribeInstances |
+-----------------------------------------------+---------------------------------------+-------------+-----------------+----------------+
| ip-192-168-9-228.us-east-2.compute.internal | eksworkshop-eksctl-ng-475d4bc8-Node | us-east-2c | 192.168.9.228 | 18.191.57.131 |
| ip-192-168-71-211.us-east-2.compute.internal | eksworkshop-eksctl-ng-475d4bc8-Node | us-east-2a | 192.168.71.211 | 18.221.77.249 |
| ip-192-168-33-135.us-east-2.compute.internal | eksworkshop-eksctl-ng-475d4bc8-Node | us-east-2b | 192.168.33.135 | 13.59.167.90 |
+-----------------------------------------------+---------------------------------------+-------------+-----------------+----------------+
```

Apply the custom resources:
```
kubectl apply -f group1-pod-netconfig.yaml
kubectl apply -f group2-pod-netconfig.yaml
kubectl apply -f group3-pod-netconfig.yaml
```
As the last step, we will annotate the nodes with the custom network configs.

{{% notice warning %}}
Be sure to annotate each instance with the config that matches its AZ. For example, in my environment instance ip-192-168-33-135.us-east-2.compute.internal is in us-east-2b, so I will apply **group1-pod-netconfig.yaml** to it. Similarly, I will apply **group2-pod-netconfig.yaml** to ip-192-168-71-211.us-east-2.compute.internal and **group3-pod-netconfig.yaml** to ip-192-168-9-228.us-east-2.compute.internal.
{{% /notice %}}

```
kubectl annotate node <nodename>.<region>.compute.internal k8s.amazonaws.com/eniConfig=group1-pod-netconfig
```
As an example, here is what I would run in my environment
```
kubectl annotate node ip-192-168-33-135.us-east-2.compute.internal k8s.amazonaws.com/eniConfig=group1-pod-netconfig
```
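If you would rather annotate all of the nodes in one pass, here is a sketch that maps each node's AZ to the matching ENIConfig. It assumes the AZ-to-config mapping from my environment above and the failure-domain.beta.kubernetes.io/zone label that EKS applies to worker nodes; adjust both for your environment.
```
# Sketch: annotate each node with the ENIConfig that matches its AZ
# (AZ-to-config mapping assumed from the environment above - adjust as needed)
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}')
do
  az=$(kubectl get node $node -o jsonpath='{.metadata.labels.failure-domain\.beta\.kubernetes\.io/zone}')
  case $az in
    us-east-2b) config=group1-pod-netconfig ;;
    us-east-2a) config=group2-pod-netconfig ;;
    us-east-2c) config=group3-pod-netconfig ;;
  esac
  echo "Annotating $node ($az) with $config ..."
  kubectl annotate node $node k8s.amazonaws.com/eniConfig=$config
done
```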
You should now see secondary IP addresses from the extended CIDR assigned to the annotated nodes.
78 changes: 78 additions & 0 deletions content/advanced-networking/secondary_cidr/prerequisites.md
@@ -0,0 +1,78 @@
---
title: "Prerequisites"
date: 2019-02-08T00:35:29-05:00
weight: 20
---

Before we configure EKS, we need to add secondary CIDR blocks to your VPC and make sure the new subnets have the proper tags and route table configuration.

### Add secondary CIDRs to your VPC

{{% notice info %}}
There are restrictions on the range of secondary CIDRs you can use to extend your VPC. For more info, see [IPv4 CIDR Block Association Restrictions](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#add-cidr-block-restrictions)
{{% /notice %}}

You can use the commands below to add 100.64.0.0/16 to your EKS cluster VPC. Note that you need to change the Values parameter to your EKS cluster name if you used a name other than eksctl-eksworkshop:
```
VPC_ID=$(aws ec2 describe-vpcs --filters Name=tag:Name,Values=eksctl-eksworkshop* | jq -r '.Vpcs[].VpcId')
aws ec2 associate-vpc-cidr-block --vpc-id $VPC_ID --cidr-block 100.64.0.0/16
```
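You can verify that the new CIDR block is now associated with the VPC:
```
aws ec2 describe-vpcs --vpc-ids $VPC_ID | jq -r '.Vpcs[].CidrBlockAssociationSet[].CidrBlock'
```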
The next step is to create the subnets. Before we do this, let's check how many subnets we are consuming. You can run this command to see the EC2 instance and AZ details:

```
aws ec2 describe-instances --filters "Name=tag:Name,Values=eksworkshop*" --query 'Reservations[*].Instances[*].[PrivateDnsName,Tags[?Key==`Name`].Value|[0],Placement.AvailabilityZone,PrivateIpAddress,PublicIpAddress]' --output table
```
```
------------------------------------------------------------------------------------------------------------------------------------------
| DescribeInstances |
+-----------------------------------------------+---------------------------------------+-------------+-----------------+----------------+
| ip-192-168-9-228.us-east-2.compute.internal | eksworkshop-eksctl-ng-475d4bc8-Node | us-east-2c | 192.168.9.228 | 18.191.57.131 |
| ip-192-168-71-211.us-east-2.compute.internal | eksworkshop-eksctl-ng-475d4bc8-Node | us-east-2a | 192.168.71.211 | 18.221.77.249 |
| ip-192-168-33-135.us-east-2.compute.internal | eksworkshop-eksctl-ng-475d4bc8-Node | us-east-2b | 192.168.33.135 | 13.59.167.90 |
+-----------------------------------------------+---------------------------------------+-------------+-----------------+----------------+
```

I have 3 instances using 3 subnets in my environment. For simplicity, we will use the same AZs and create 3 secondary CIDR subnets, but you can certainly customize this according to your networking requirements. Remember to change the AZ names according to your environment:
```
export AZ1=us-east-2a
export AZ2=us-east-2b
export AZ3=us-east-2c
CGNAT_SNET1=$(aws ec2 create-subnet --cidr-block 100.64.0.0/19 --vpc-id $VPC_ID --availability-zone $AZ1 | jq -r .Subnet.SubnetId)
CGNAT_SNET2=$(aws ec2 create-subnet --cidr-block 100.64.32.0/19 --vpc-id $VPC_ID --availability-zone $AZ2 | jq -r .Subnet.SubnetId)
CGNAT_SNET3=$(aws ec2 create-subnet --cidr-block 100.64.64.0/19 --vpc-id $VPC_ID --availability-zone $AZ3 | jq -r .Subnet.SubnetId)
```
The next step is to add Kubernetes tags to the new subnets. You can check these tags by querying one of your current subnets:
```
aws ec2 describe-subnets --filters Name=cidr-block,Values=192.168.0.0/19 --output text
```
The output looks similar to this:
```
TAGS aws:cloudformation:logical-id SubnetPublicUSEAST2C
TAGS kubernetes.io/role/elb 1
TAGS eksctl.cluster.k8s.io/v1alpha1/cluster-name eksworkshop-eksctl
TAGS Name eksctl-eksworkshop-eksctl-cluster/SubnetPublicUSEAST2C
TAGS aws:cloudformation:stack-name eksctl-eksworkshop-eksctl-cluster
TAGS kubernetes.io/cluster/eksworkshop-eksctl shared
TAGS aws:cloudformation:stack-id arn:aws:cloudformation:us-east-2:012345678901:stack/eksctl-eksworkshop-eksctl-cluster/8da51fc0-2b5e-11e9-b535-022c6f51bf82
```
Here are the commands to add tags to all three subnets:
```
aws ec2 create-tags --resources $CGNAT_SNET1 --tags Key=eksctl.cluster.k8s.io/v1alpha1/cluster-name,Value=eksworkshop-eksctl
aws ec2 create-tags --resources $CGNAT_SNET1 --tags Key=kubernetes.io/cluster/eksworkshop-eksctl,Value=shared
aws ec2 create-tags --resources $CGNAT_SNET1 --tags Key=kubernetes.io/role/elb,Value=1
aws ec2 create-tags --resources $CGNAT_SNET2 --tags Key=eksctl.cluster.k8s.io/v1alpha1/cluster-name,Value=eksworkshop-eksctl
aws ec2 create-tags --resources $CGNAT_SNET2 --tags Key=kubernetes.io/cluster/eksworkshop-eksctl,Value=shared
aws ec2 create-tags --resources $CGNAT_SNET2 --tags Key=kubernetes.io/role/elb,Value=1
aws ec2 create-tags --resources $CGNAT_SNET3 --tags Key=eksctl.cluster.k8s.io/v1alpha1/cluster-name,Value=eksworkshop-eksctl
aws ec2 create-tags --resources $CGNAT_SNET3 --tags Key=kubernetes.io/cluster/eksworkshop-eksctl,Value=shared
aws ec2 create-tags --resources $CGNAT_SNET3 --tags Key=kubernetes.io/role/elb,Value=1
```
As the next step, we need to associate the three new subnets with a route table. Again, for simplicity, we chose to add the new subnets to the public route table that has connectivity to the Internet Gateway:
```
SNET1=$(aws ec2 describe-subnets --filters Name=cidr-block,Values=192.168.0.0/19 | jq -r .Subnets[].SubnetId)
RTASSOC_ID=$(aws ec2 describe-route-tables --filters Name=association.subnet-id,Values=$SNET1 | jq -r .RouteTables[].RouteTableId)
aws ec2 associate-route-table --route-table-id $RTASSOC_ID --subnet-id $CGNAT_SNET1
aws ec2 associate-route-table --route-table-id $RTASSOC_ID --subnet-id $CGNAT_SNET2
aws ec2 associate-route-table --route-table-id $RTASSOC_ID --subnet-id $CGNAT_SNET3
```
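To confirm the associations took effect, you can list the subnets attached to that route table:
```
aws ec2 describe-route-tables --route-table-ids $RTASSOC_ID --query 'RouteTables[].Associations[].SubnetId' --output table
```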
44 changes: 44 additions & 0 deletions content/advanced-networking/secondary_cidr/test_networking.md
@@ -0,0 +1,44 @@
---
title: "Test Networking"
date: 2019-03-02T15:18:32-05:00
weight: 50
---

### Launch pods into Secondary CIDR network

Let's launch a few pods and test networking:
```
kubectl run nginx --image=nginx
kubectl scale --replicas=3 deployments/nginx
kubectl expose deployment/nginx --type=NodePort --port 80
kubectl get pods -o wide
```
```
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
nginx-64f497f8fd-k962k 1/1 Running 0 40m 100.64.6.147 ip-192-168-52-113.us-east-2.compute.internal <none>
nginx-64f497f8fd-lkslh 1/1 Running 0 40m 100.64.53.10 ip-192-168-74-125.us-east-2.compute.internal <none>
nginx-64f497f8fd-sgz6f 1/1 Running 0 40m 100.64.80.186 ip-192-168-26-65.us-east-2.compute.internal <none>
```
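Notice that the pod IPs come from the 100.64.0.0/16 secondary CIDR, while the nodes keep addresses from the primary VPC CIDR. You can contrast them with:
```
kubectl get nodes -o wide
```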
You can use a busybox pod to ping pods on the same host or across hosts using their IP addresses.

```
kubectl run -i --rm --tty debug --image=busybox -- sh
```
Test access to the internet and to the nginx service:
```
# connect to internet
/ # wget google.com -O -
Connecting to google.com (172.217.5.238:80)
Connecting to www.google.com (172.217.5.228:80)
<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en"><head><meta content="Search the world's information, including webpages, images, videos and more. Google has many special
...
# connect to service (testing core-dns)
/ # wget nginx -O -
Connecting to nginx (10.100.170.156:80)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
```
