Commit 4fe9416

Author: Quinton Hoole
Merge pull request kubernetes#13477 from caseydavenport/update-calico-ubuntu-docs

Update calico ubuntu docs

2 parents: be19554 + 3c4ab92

File tree: 1 file changed (+27, -9)

docs/getting-started-guides/ubuntu-calico.md

Lines changed: 27 additions & 9 deletions
@@ -35,7 +35,7 @@ Kubernetes Deployment On Bare-metal Ubuntu Nodes with Calico Networking
 
 ## Introduction
 
-This document describes how to deploy Kubernetes on ubuntu bare metal nodes with Calico Networking plugin. See [projectcalico.org](http://projectcalico.org) for more information on what Calico is, and [the calicoctl github](https://github.com/projectcalico/calico-docker) for more information on the command-line tool, `calicoctl`.
+This document describes how to deploy Kubernetes on Ubuntu bare metal nodes with Calico Networking plugin. See [projectcalico.org](http://projectcalico.org) for more information on what Calico is, and [the calicoctl github](https://github.com/projectcalico/calico-docker) for more information on the command-line tool, `calicoctl`.
 
 This guide will set up a simple Kubernetes cluster with a master and two nodes. We will start the following processes with systemd:
 
@@ -54,7 +54,8 @@ On each Node:
 ## Prerequisites
 
 1. This guide uses `systemd` and thus uses Ubuntu 15.04 which supports systemd natively.
-2. All Kubernetes nodes should have the latest docker stable version installed. At the time of writing, that is Docker 1.7.0.
+2. All machines should have the latest docker stable version installed. At the time of writing, that is Docker 1.7.0.
+  - To install docker, follow [these instructions](https://docs.docker.com/installation/ubuntulinux/)
 3. All hosts should be able to communicate with each other, as well as the internet, to download the necessary files.
 4. This demo assumes that none of the hosts have been configured with any Kubernetes or Calico software yet.
 
@@ -122,8 +123,6 @@ sudo systemctl start kube-controller-manager.service
 sudo systemctl start kube-scheduler.service
 ```
 
-> *You may want to consider checking their status after to ensure everything is running.*
-
 ### Install Calico on Master
 
 In order to allow the master to route to pods on our nodes, we will launch the calico-node daemon on our master. This will allow it to learn routes over BGP from the other calico-node daemons in the cluster. The docker daemon should already be running before calico is started.
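Although the commit drops the "check their status" note from this spot, it can still be useful to verify that the master services actually started. A sketch (the service names are the ones started in the guide above):

```shell
# Check each master service with systemctl; prints one status line per service.
for svc in kube-apiserver kube-controller-manager kube-scheduler; do
  if systemctl is-active --quiet "$svc"; then
    echo "$svc: running"
  else
    echo "$svc: NOT running"
  fi
done
```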
@@ -176,6 +175,7 @@ sudo mv -f network-environment /etc
 Instead of using docker's default interface (docker0), we will configure a new one to use desired IP ranges
 
 ```
+sudo apt-get install -y bridge-utils
 sudo brctl addbr cbr0
 sudo ifconfig cbr0 up
 sudo ifconfig cbr0 <IP>/24
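The bridge commands above use `brctl` and `ifconfig`; on newer Ubuntu releases the same setup can be sketched with iproute2 instead (the `192.168.1.1/24` address is only an example, substitute an address from your chosen cbr0 range):

```shell
# Create the cbr0 bridge, bring it up, and assign its subnet address
# using iproute2 (equivalent to the brctl/ifconfig commands above).
sudo ip link add name cbr0 type bridge
sudo ip link set cbr0 up
sudo ip addr add 192.168.1.1/24 dev cbr0   # example address; pick yours
```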
@@ -197,9 +197,12 @@ The Docker daemon must be started and told to use the already configured cbr0 in
 
 2.) Find the line that reads `ExecStart=/usr/bin/docker -d -H fd://` and append the following flags: `--bridge=cbr0 --iptables=false --ip-masq=false`
 
-3.) Reload systemctl with `sudo systemctl daemon-reload`
+3.) Reload systemctl and restart docker.
 
-4.) Restart docker with with `sudo systemctl restart docker`
+```
+sudo systemctl daemon-reload
+sudo systemctl restart docker
+```
 
 ### Install Calico on the Node
 
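Rather than editing `docker.service` in place as step 2 above does, the same `ExecStart` flags could be applied through a systemd drop-in, which survives package upgrades. A sketch (the drop-in file name `10-cbr0.conf` is an assumption; the flags are the ones listed above):

```shell
# Create a systemd drop-in that overrides docker's ExecStart with the
# cbr0 bridge flags, then reload and restart docker.
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/10-cbr0.conf
[Service]
ExecStart=
ExecStart=/usr/bin/docker -d -H fd:// --bridge=cbr0 --iptables=false --ip-masq=false
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```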
@@ -241,6 +244,10 @@ kubernetes/cluster/ubuntu/build.sh
 
 # Add binaries to /usr/bin
 sudo cp -f binaries/minion/* /usr/bin
+
+# Get the iptables based kube-proxy recommended for this demo
+sudo wget https://github.com/projectcalico/calico-kubernetes/releases/download/v0.1.1/kube-proxy -P /usr/bin/
+sudo chmod +x /usr/bin/kube-proxy
 ```
 
 2.) Install and launch the sample systemd processes settings for launching Kubernetes services
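After fetching the kube-proxy binary as above, a small sanity check can confirm it landed in place and is executable before wiring it into systemd. A sketch (the `check_binary` helper is a made-up name for illustration):

```shell
# check_binary: succeed only if the given path is a regular, executable file.
check_binary() {
  [ -f "$1" ] && [ -x "$1" ]
}

if check_binary /usr/bin/kube-proxy; then
  echo "kube-proxy ready"
else
  echo "kube-proxy missing or not executable" >&2
fi
```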
@@ -256,6 +263,14 @@ sudo systemctl start kube-kubelet.service
 
 >*You may want to consider checking their status after to ensure everything is running*
 
+## Install the DNS Addon
+
+Most Kubernetes deployments will require the DNS addon for service discovery. For more on DNS service discovery, check [here](../../cluster/addons/dns/).
+
+The config repository for this guide comes with manifest files to start the DNS addon. To install DNS, do the following on your Master node.
+
+Replace `<MASTER_IP>` in `calico-kubernetes-ubuntu-demo-master/dns/skydns-rc.yaml` with your Master's IP address. Then, create `skydns-rc.yaml` and `skydns-svc.yaml` using `kubectl create -f <FILE>`.
+
 ## Launch other Services With Calico-Kubernetes
 
 At this point, you have a fully functioning cluster running on kubernetes with a master and 2 nodes networked with Calico. You can now follow any of the [standard documentation](../../examples/) to set up other services on your cluster.
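The `<MASTER_IP>` substitution for the DNS manifests described above can be scripted rather than done by hand. A sketch using GNU sed (the `fill_master_ip` helper is a made-up name; the manifest path is the one from the guide):

```shell
# fill_master_ip: replace every <MASTER_IP> placeholder in a manifest in place.
fill_master_ip() {
  local manifest="$1" ip="$2"
  sed -i "s/<MASTER_IP>/${ip}/g" "$manifest"
}

# Example usage on the master (address is illustrative):
# fill_master_ip calico-kubernetes-ubuntu-demo-master/dns/skydns-rc.yaml 172.25.0.10
# kubectl create -f calico-kubernetes-ubuntu-demo-master/dns/skydns-rc.yaml
# kubectl create -f calico-kubernetes-ubuntu-demo-master/dns/skydns-svc.yaml
```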
@@ -268,12 +283,15 @@ With this sample configuration, because the containers have private `192.168.0.0
 
 The simplest method for enabling connectivity from containers to the internet is to use an iptables masquerade rule. This is the standard mechanism [recommended](../../docs/admin/networking.md#google-compute-engine-gce) in the Kubernetes GCE environment.
 
-We need to NAT traffic that has a destination outside of the cluster. Internal traffic includes the master/nodes, and the container IP pools. Assuming that the master and nodes are in the `172.25.0.0/24` subnet, the cbr0 IP ranges are all in the `192.168.0.0/16` network, and the nodes use the interface `eth0` for external connectivity, a suitable masquerade chain would look like this:
+We need to NAT traffic that has a destination outside of the cluster. Internal traffic includes the master/nodes, and the container IP pools. A suitable masquerade chain would follow the pattern below, replacing the following variables:
+- `CONTAINER_SUBNET`: The cluster-wide subnet from which container IPs are chosen. All cbr0 bridge subnets fall within this range. The above example uses `192.168.0.0/16`.
+- `KUBERNETES_HOST_SUBNET`: The subnet from which Kubernetes node / master IP addresses have been chosen.
+- `HOST_INTERFACE`: The interface on the Kubernetes node which is used for external connectivity. The above example uses `eth0`.
 
 ```
 sudo iptables -t nat -N KUBE-OUTBOUND-NAT
-sudo iptables -t nat -A KUBE-OUTBOUND-NAT -d 192.168.0.0/16 -o eth0 -j RETURN
-sudo iptables -t nat -A KUBE-OUTBOUND-NAT -d 172.25.0.0/24 -o eth0 -j RETURN
+sudo iptables -t nat -A KUBE-OUTBOUND-NAT -d <CONTAINER_SUBNET> -o <HOST_INTERFACE> -j RETURN
+sudo iptables -t nat -A KUBE-OUTBOUND-NAT -d <KUBERNETES_HOST_SUBNET> -o <HOST_INTERFACE> -j RETURN
 sudo iptables -t nat -A KUBE-OUTBOUND-NAT -j MASQUERADE
 sudo iptables -t nat -A POSTROUTING -j KUBE-OUTBOUND-NAT
 ```
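The parameterized masquerade chain above can be generated from the three variables and reviewed before applying. A sketch that echoes the rules rather than executing them (the `emit_masq_rules` helper is a made-up name; prefix each emitted line with `sudo` to apply):

```shell
# emit_masq_rules: print the iptables masquerade chain for the given
# container subnet, host subnet, and external interface.
emit_masq_rules() {
  local container_subnet="$1" host_subnet="$2" iface="$3"
  echo "iptables -t nat -N KUBE-OUTBOUND-NAT"
  echo "iptables -t nat -A KUBE-OUTBOUND-NAT -d ${container_subnet} -o ${iface} -j RETURN"
  echo "iptables -t nat -A KUBE-OUTBOUND-NAT -d ${host_subnet} -o ${iface} -j RETURN"
  echo "iptables -t nat -A KUBE-OUTBOUND-NAT -j MASQUERADE"
  echo "iptables -t nat -A POSTROUTING -j KUBE-OUTBOUND-NAT"
}

# Example with the values used earlier in this guide:
emit_masq_rules 192.168.0.0/16 172.25.0.0/24 eth0
```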
