docs/getting-started-guides/ubuntu-calico.md
# Kubernetes Deployment On Bare-metal Ubuntu Nodes with Calico Networking
## Introduction
This document describes how to deploy Kubernetes on Ubuntu bare metal nodes with the Calico networking plugin. See [projectcalico.org](http://projectcalico.org) for more information on what Calico is, and [the calicoctl github](https://github.com/projectcalico/calico-docker) for more information on the command-line tool, `calicoctl`.
This guide will set up a simple Kubernetes cluster with a master and two nodes. We will start the following processes with systemd:
## Prerequisites
1. This guide uses `systemd` and thus requires Ubuntu 15.04, which supports systemd natively.
2. All machines should have the latest stable version of Docker installed. At the time of writing, that is Docker 1.7.0.
   - To install Docker, follow [these instructions](https://docs.docker.com/installation/ubuntulinux/).
3. All hosts should be able to communicate with each other and with the internet in order to download the necessary files.
4. This demo assumes that none of the hosts have been configured with any Kubernetes or Calico software yet.
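
A quick sanity check of these prerequisites might look like the following (a minimal sketch, run on the master and on each node; adjust the target host in the connectivity check to anything reachable on the internet):

```sh
# Confirm systemd is available (Ubuntu 15.04 ships with it natively).
systemctl --version

# Confirm Docker is installed and at the expected version (1.7.0 at the time of writing).
docker --version

# Confirm the host can reach the internet to download the necessary files.
ping -c 3 get.docker.com
```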
> *You may want to check their status afterwards to ensure everything is running.*
### Install Calico on Master
In order to allow the master to route to pods on our nodes, we will launch the calico-node daemon on our master. This will allow it to learn routes over BGP from the other calico-node daemons in the cluster. The Docker daemon should already be running before Calico is started.
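
As a rough sketch of what this looks like with the `calicoctl` tool from calico-docker (not the guide's verbatim commands; `<MASTER_IP>` stands in for the master's own address):

```sh
# Docker must already be running; calico-node itself runs as a container.
sudo calicoctl node --ip=<MASTER_IP>

# Verify that the calico-node container started.
sudo docker ps | grep calico-node
```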
> *You may want to check its status afterwards to ensure everything is running.*
## Install the DNS Addon
Most Kubernetes deployments will require the DNS addon for service discovery. For more on DNS service discovery, check [here](../../cluster/addons/dns/).
The config repository for this guide comes with manifest files to start the DNS addon. To install DNS, do the following on your Master node.
Replace `<MASTER_IP>` in `calico-kubernetes-ubuntu-demo-master/dns/skydns-rc.yaml` with your Master's IP address. Then, create `skydns-rc.yaml` and `skydns-svc.yaml` using `kubectl create -f <FILE>`.
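
A sketch of those two steps, assuming the demo repository was unpacked in the current directory and that `skydns-svc.yaml` sits alongside `skydns-rc.yaml` in the same `dns/` directory:

```sh
# Substitute the master's IP address (172.25.0.1 here is only an example).
sed -i 's/<MASTER_IP>/172.25.0.1/' calico-kubernetes-ubuntu-demo-master/dns/skydns-rc.yaml

# Create the DNS replication controller and service.
kubectl create -f calico-kubernetes-ubuntu-demo-master/dns/skydns-rc.yaml
kubectl create -f calico-kubernetes-ubuntu-demo-master/dns/skydns-svc.yaml
```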
## Launch other Services With Calico-Kubernetes
At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](../../examples/) to set up other services on your cluster.
With this sample configuration, because the containers have private `192.168.0.0/16` addresses, NAT is needed for them to reach destinations outside of the cluster.
The simplest method for enabling connectivity from containers to the internet is to use an iptables masquerade rule. This is the standard mechanism [recommended](../../docs/admin/networking.md#google-compute-engine-gce) in the Kubernetes GCE environment.
We need to NAT traffic that has a destination outside of the cluster. Internal traffic includes the master/nodes and the container IP pools. A suitable masquerade chain would follow the pattern below, replacing the following variables:
- `CONTAINER_SUBNET`: The cluster-wide subnet from which container IPs are chosen. All cbr0 bridge subnets fall within this range. The above example uses `192.168.0.0/16`.
- `KUBERNETES_HOST_SUBNET`: The subnet from which Kubernetes node / master IP addresses have been chosen.
- `HOST_INTERFACE`: The interface on the Kubernetes node which is used for external connectivity. The above example uses `eth0`.
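
As an illustration only (a sketch, not the guide's exact rule set; the chain name and the example values below are placeholders to substitute with your own), such a masquerade chain could be built like this:

```sh
# Example values -- replace with the real ones for your cluster.
CONTAINER_SUBNET=192.168.0.0/16
KUBERNETES_HOST_SUBNET=172.25.0.0/24
HOST_INTERFACE=eth0

# Dedicated chain for outbound NAT decisions.
sudo iptables -t nat -N KUBE-OUTBOUND-NAT
# Traffic destined for other containers or for cluster hosts is left un-NATed.
sudo iptables -t nat -A KUBE-OUTBOUND-NAT -d ${CONTAINER_SUBNET} -o ${HOST_INTERFACE} -j RETURN
sudo iptables -t nat -A KUBE-OUTBOUND-NAT -d ${KUBERNETES_HOST_SUBNET} -o ${HOST_INTERFACE} -j RETURN
# Everything else leaving through the external interface gets masqueraded.
sudo iptables -t nat -A KUBE-OUTBOUND-NAT -o ${HOST_INTERFACE} -j MASQUERADE
# Hook the chain into POSTROUTING.
sudo iptables -t nat -A POSTROUTING -j KUBE-OUTBOUND-NAT
```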