
Docker Driver Using Multi-Node Clusters & NodePort #9364

Closed
mativillagra opened this issue Sep 30, 2020 · 7 comments
Labels
co/docker-driver Issues related to kubernetes in container co/multinode Issues related to multinode clusters kind/support Categorizes issue or PR as a support question.

Comments

@mativillagra

mativillagra commented Sep 30, 2020

I followed this documentation and executed the following commands:

https://minikube.sigs.k8s.io/docs/tutorials/multi_node/

[matias@localhost multi-node-cluster]$ minikube start --nodes 2
πŸ˜„  minikube v1.13.1 on Centos 8.2.2004
✨  Automatically selected the docker driver

This brings the multi-node cluster up on my local host successfully.

Note that the automatically selected driver is docker.

[matias@localhost multi-node-cluster]$ minikube version
minikube version: v1.13.1
commit: 1fd1f67f338cbab4b3e5a6e4c71c551f522ca138-dirty

[matias@localhost multi-node-cluster]$ k get no -o wide
NAME           STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION                 CONTAINER-RUNTIME
minikube       Ready    master   80s   v1.19.2   172.17.0.3    <none>        Ubuntu 20.04 LTS   4.18.0-193.19.1.el8_2.x86_64   docker://19.3.8
minikube-m02   Ready    <none>   51s   v1.19.2   172.17.0.4    <none>        Ubuntu 20.04 LTS   4.18.0-193.19.1.el8_2.x86_64   docker://19.3.8

[matias@localhost multi-node-cluster]$ minikube ip
172.17.0.3

I applied the two *.yaml files from the tutorial to the cluster, with no changes.

https://minikube.sigs.k8s.io/docs/tutorials/multi_node/

[matias@localhost multi-node-cluster]$ kubectl apply -f hello-deployment.yaml
deployment.apps/hello created

[matias@localhost multi-node-cluster]$ kubectl apply -f hello-svc.yaml 
service/hello created
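For reference, a Service manifest matching the `kubectl describe svc hello` output below would look roughly like this. This is a sketch reconstructed from the outputs in this issue, not the tutorial's exact file:

```yaml
# Sketch of hello-svc.yaml, reconstructed from the service details shown
# in this issue; the tutorial's actual file may differ slightly.
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: NodePort
  selector:
    app: hello
  ports:
    - port: 80          # service port (CLUSTER-IP:80)
      targetPort: http  # named container port in the hello deployment
      nodePort: 31000   # fixed node port, as seen in `kubectl get svc`
```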

[matias@localhost multi-node-cluster]$ k get po -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
hello-f45cbcf6d-95lsx   1/1     Running   0          38s   172.18.0.2   minikube-m02   <none>           <none>
hello-f45cbcf6d-trb4h   1/1     Running   0          38s   172.18.0.3   minikube       <none>           <none>

[matias@localhost multi-node-cluster]$ k describe svc hello
Name:                     hello
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=hello
Type:                     NodePort
IP:                       10.110.20.70
Port:                     <unset>  80/TCP
TargetPort:               http/TCP
NodePort:                 <unset>  31000/TCP
Endpoints:                172.18.0.2:80,172.18.0.3:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

[matias@localhost multi-node-cluster]$ k get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
hello        NodePort    10.110.20.70   <none>        80:31000/TCP   26m
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        30m

[matias@localhost multi-node-cluster]$ minikube service list
|-------------|------------|--------------|-------------------------|
|  NAMESPACE  |    NAME    | TARGET PORT  |           URL           |
|-------------|------------|--------------|-------------------------|
| default     | hello      |           80 | http://172.17.0.3:31000 |
| default     | kubernetes | No node port |
| kube-system | kube-dns   | No node port |
|-------------|------------|--------------|-------------------------|

My question/problem: if I curl http://172.17.0.3:31000, the response always comes from the same pod; it never switches to the other pod. Can you explain why? Is this a bug/known issue when using the docker driver with multi-node clusters on minikube?

[matias@localhost multi-node-cluster]$ curl http://172.17.0.3:31000
Hello from hello-f45cbcf6d-trb4h (172.18.0.3)
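To make the stickiness easier to see, one can hit the NodePort in a loop and count which pod answered. A minimal sketch, using the URL from `minikube service list` above (adjust if yours differs):

```shell
# Hit the NodePort repeatedly and count responses per pod.
url=http://172.17.0.3:31000
for i in $(seq 1 10); do curl -s -m 1 "$url" || true; echo; done | sort | uniq -c
# With working cross-node routing, both pod names should appear in the
# counts; with the behavior described here, only one pod ever answers.
```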

Thanks in advance.

@k8s-ci-robot
Contributor

@RA489: The label(s) triage/support cannot be applied, because the repository doesn't have them

In response to this:

/triage support

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@RA489 RA489 added co/docker-driver Issues related to kubernetes in container co/multinode Issues related to multinode clusters labels Oct 1, 2020
@RA489

RA489 commented Oct 1, 2020

/kind support

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label Oct 1, 2020
@tstromberg
Contributor

I believe that #9875 fixes this.

@sadlil
Contributor

sadlil commented Dec 9, 2020

@mativillagra As a workaround for now, use `minikube start -n 2 -p p1 --cni=kindnet --extra-config=kubeadm.pod-network-cidr=10.244.0.0/16` to start your cluster.

See - #9838.

@mativillagra
Author

@sadlil Thanks, I will try it out and report back. Sorry for the late reply to your comment.

@sharifelgamal
Collaborator

Yes, this should be fixed in the latest version of minikube. Please reopen this issue if you are still running into it.

@mativillagra
Author

Hello!

Pasting my results here; as you pointed out, it works now!

matias@matias:~ $ minikube version
minikube version: v1.17.1
commit: 043bdca07e54ab6e4fc0457e3064048f34133d7e

The only difference from the configuration above is the minikube version used; everything else, including the deployments, is the same.

matias@matias:~ $ k get po -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
hello-f45cbcf6d-75gzb   1/1     Running   0          44m   10.244.1.4   minikube-m02   <none>           <none>
hello-f45cbcf6d-tsxv8   1/1     Running   0          48m   10.244.1.3   minikube-m02   <none>           <none>

matias@matias:~ $ k get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
hello        NodePort    10.96.61.243   <none>        80:31000/TCP   24m
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        27m
matias@matias:~ $ k describe svc hello
Name:                     hello
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=hello
Type:                     NodePort
IP Families:              <none>
IP:                       10.96.61.243
IPs:                      10.96.61.243
Port:                     <unset>  80/TCP
TargetPort:               http/TCP
NodePort:                 <unset>  31000/TCP
Endpoints:                10.244.1.3:80,10.244.1.4:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

matias@matias:~ $ minikube ip
192.168.49.2
matias@matias:~ $ minikube service list
|-------------|------------|--------------|---------------------------|
|  NAMESPACE  |    NAME    | TARGET PORT  |            URL            |
|-------------|------------|--------------|---------------------------|
| default     | hello      |           80 | http://192.168.49.2:31000 |
| default     | kubernetes | No node port |
| kube-system | kube-dns   | No node port |
|-------------|------------|--------------|---------------------------|
matias@matias:~ $ 

matias@matias:~ $ for i in `seq 1 10`; do curl http://192.168.49.2:31000; echo; done
Hello from hello-f45cbcf6d-tsxv8 (10.244.1.3)
Hello from hello-f45cbcf6d-tsxv8 (10.244.1.3)
Hello from hello-f45cbcf6d-tsxv8 (10.244.1.3)
Hello from hello-f45cbcf6d-75gzb (10.244.1.4)
Hello from hello-f45cbcf6d-75gzb (10.244.1.4)
Hello from hello-f45cbcf6d-75gzb (10.244.1.4)
Hello from hello-f45cbcf6d-75gzb (10.244.1.4)
Hello from hello-f45cbcf6d-tsxv8 (10.244.1.3)
Hello from hello-f45cbcf6d-75gzb (10.244.1.4)
Hello from hello-f45cbcf6d-tsxv8 (10.244.1.3)

As you can see, requests are now served by one pod and then the other.

Thanks for the support, and sorry for the late response!
