
Working CNI & Network policy #48

Merged
merged 2 commits into master on Jun 7, 2017

Conversation

lewismarshall (Contributor)

Depends on merging nonlive/keto-k8#27 and new tag

@@ -215,11 +218,14 @@ coreos:
Environment="RKT_OPTS=\
--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume etc-resolv,kind=host,source=/etc/resolv.conf --mount volume=etc-resolv,target=/etc/resolv.conf \
--volume etc-cni,kind=host,source=/etc/cni --mount volume=etc-cni,target=/etc/cni \
--volume=opt-cni,kind=host,source=/opt/cni,readOnly=true --mount volume=opt-cni,target=/opt/weave-net \
--volume var-log,kind=host,source=/var/log --mount volume=var-log,target=/var/log \
Joseph-Irving (Contributor)

--volume=opt-cni: is that equals sign intentional?

lewismarshall (Contributor, Author)

@Joseph-Irving not intentional, but it works either way; amended for consistency.

-DefaultKubeVersion = "v1.6.3"
+DefaultKubeVersion = "v1.6.4"
// DeafultNetworkProvider specifies what CNI provider to install
DeafultNetworkProvider = "canal"
// DefaultKetoK8Image specifies the image to use for keto-k8 container
Contributor

Deafult is spelt Default; however, you have consistently misspelt it, so it will still work.

lewismarshall (Contributor, Author)

Good point - we can't have that - pushed the de-typo-isation

vaijab (Contributor)

Fix the spelling :-)


lewismarshall (Contributor, Author)

@vaijab spelling fixed.
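For reference, a minimal sketch of how the constants presumably read after the spelling fix (the names and values come from the diff above; the surrounding package layout is assumed):

```go
package main

import "fmt"

const (
	// DefaultKubeVersion specifies the Kubernetes version to install.
	DefaultKubeVersion = "v1.6.4"
	// DefaultNetworkProvider specifies which CNI provider to install.
	DefaultNetworkProvider = "canal"
)

func main() {
	fmt.Printf("kube=%s provider=%s\n", DefaultKubeVersion, DefaultNetworkProvider)
}
```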

@@ -215,11 +218,14 @@ coreos:
Environment="RKT_OPTS=\
--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume etc-resolv,kind=host,source=/etc/resolv.conf --mount volume=etc-resolv,target=/etc/resolv.conf \
--volume etc-cni,kind=host,source=/etc/cni --mount volume=etc-cni,target=/etc/cni \
--volume opt-cni,kind=host,source=/opt/cni,readOnly=true --mount volume=opt-cni,target=/opt/weave-net \
Contributor

DefaultNetworkProvider = "canal" says it's canal, but this line says it's weave?

lewismarshall (Contributor, Author)

This configuration is required only for weave, as the weave binaries are not supplied with the default CoreOS hyperkube image and need to be mounted in explicitly. The binaries for the other providers are supplied with the image and match a working kubelet. See weaveworks/weave#2613

lewismarshall (Contributor, Author)

I could add a flag to allow the use of all CNI providers as they all work with this configuration.
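A hypothetical sketch of such a flag, assuming the three providers tested below; the flag name, provider list, and validation are illustrative, not the actual keto-k8 interface:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// networkProvider selects which CNI provider to install. The default
// mirrors DefaultNetworkProvider = "canal" from the diff above.
var networkProvider = flag.String("network-provider", "canal",
	"CNI provider to install: canal, flannel or weave")

// validProvider reports whether p is one of the providers exercised
// in this PR (see the kubectl listings below).
func validProvider(p string) bool {
	switch p {
	case "canal", "flannel", "weave":
		return true
	}
	return false
}

func main() {
	flag.Parse()
	if !validProvider(*networkProvider) {
		fmt.Fprintf(os.Stderr, "unsupported network provider: %q\n", *networkProvider)
		os.Exit(1)
	}
	fmt.Printf("installing CNI provider: %s\n", *networkProvider)
}
```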

lewismarshall (Contributor, Author)

flannel:

/ # kubectl -n kube-system get pods
NAME                                                                  READY     STATUS    RESTARTS   AGE
keto-tokens-297599340-j6h85                                           1/1       Running   0          6m
kube-apiserver-ip-172-31-13-220.eu-west-2.compute.internal            1/1       Running   0          6m
kube-apiserver-ip-172-31-16-220.eu-west-2.compute.internal            1/1       Running   0          5m
kube-controller-manager-ip-172-31-13-220.eu-west-2.compute.internal   1/1       Running   0          6m
kube-controller-manager-ip-172-31-16-220.eu-west-2.compute.internal   1/1       Running   1          6m
kube-dns-3913472980-r76k7                                             3/3       Running   0          6m
kube-flannel-ds-9ql1b                                                 2/2       Running   0          3m
kube-flannel-ds-mcj4z                                                 2/2       Running   0          6m
kube-flannel-ds-vgsf5                                                 2/2       Running   0          6m
kube-proxy-4mtpj                                                      1/1       Running   0          6m
kube-proxy-k0vrm                                                      1/1       Running   0          3m
kube-proxy-l86mm                                                      1/1       Running   0          6m
kube-scheduler-ip-172-31-13-220.eu-west-2.compute.internal            1/1       Running   0          6m
kube-scheduler-ip-172-31-16-220.eu-west-2.compute.internal            1/1       Running   0          5m

canal:

/ # kubectl -n kube-system get pods
NAME                                                                 READY     STATUS    RESTARTS   AGE
canal-8vc9n                                                          3/3       Running   0          1m
canal-k4vpw                                                          3/3       Running   0          3m
canal-m0mdv                                                          3/3       Running   0          4m
keto-tokens-2709979396-k6d05                                         1/1       Running   0          4m
kube-apiserver-ip-172-31-20-34.eu-west-2.compute.internal            1/1       Running   0          3m
kube-apiserver-ip-172-31-9-166.eu-west-2.compute.internal            1/1       Running   0          3m
kube-controller-manager-ip-172-31-20-34.eu-west-2.compute.internal   1/1       Running   0          3m
kube-controller-manager-ip-172-31-9-166.eu-west-2.compute.internal   1/1       Running   0          3m
kube-dns-3913472980-24gkm                                            3/3       Running   0          4m
kube-proxy-181db                                                     1/1       Running   0          4m
kube-proxy-jglbw                                                     1/1       Running   0          3m
kube-proxy-q8bt9                                                     1/1       Running   0          1m
kube-scheduler-ip-172-31-20-34.eu-west-2.compute.internal            1/1       Running   0          3m
kube-scheduler-ip-172-31-9-166.eu-west-2.compute.internal            1/1       Running   0          3m

weave:

/ # kubectl -n kube-system get pods
NAME                                                                  READY     STATUS    RESTARTS   AGE
keto-tokens-2625831197-2vkcs                                          1/1       Running   0          18m
kube-apiserver-ip-172-31-26-133.eu-west-2.compute.internal            1/1       Running   0          17m
kube-apiserver-ip-172-31-8-230.eu-west-2.compute.internal             1/1       Running   0          17m
kube-controller-manager-ip-172-31-26-133.eu-west-2.compute.internal   1/1       Running   0          17m
kube-controller-manager-ip-172-31-8-230.eu-west-2.compute.internal    1/1       Running   0          17m
kube-dns-3913472980-8hvrp                                             3/3       Running   0          18m
kube-proxy-nkq5j                                                      1/1       Running   0          15m
kube-proxy-pjs40                                                      1/1       Running   0          18m
kube-proxy-s246f                                                      1/1       Running   0          17m
kube-scheduler-ip-172-31-26-133.eu-west-2.compute.internal            1/1       Running   0          17m
kube-scheduler-ip-172-31-8-230.eu-west-2.compute.internal             1/1       Running   0          17m
weave-net-hqm3v                                                       2/2       Running   0          18m
weave-net-lgld7                                                       2/2       Running   0          15m
weave-net-pk96h                                                       2/2       Running   0          17m

vaijab commented May 26, 2017

Let's leave this PR open. I am not sure what the goal of this is at the moment.


lewismarshall commented May 26, 2017

I'll leave it with you @vaijab @jon-shanks. It's ready for people to easily prototype with canal, which I think is probably where we want to go. It uses kernel-level routing only (iptables), flannel for the overlay, and Calico to enable a superset of the Kubernetes network policy, all driven through the Kubernetes cluster API with calicoctl. We can even control egress! See the ticket for updates: https://github.com/UKHomeOffice/application-container-platform-board/issues/328

@gambol99 @jaykeshur Build keto off this branch if you want calico (via canal) working.

@lewismarshall lewismarshall merged commit aeaf9c9 into master Jun 7, 2017
@jaykeshur jaykeshur deleted the network_policy branch June 13, 2017 14:40