
wip: new flag --static-ip to allow random IP #13050

Closed
wants to merge 1 commit into from

Conversation

medyagh
Member

@medyagh medyagh commented Nov 29, 2021

Trying to help a user on Slack who wanted to have two minikube clusters on the same Docker network.

Chris Tomkins, Today at 9:56 AM
I have a need (for lab purposes only) to create two Minikube clusters on the same Docker bridge, e.g.:
minikube -p cluster-a start --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.200.0.0/16 --service-cluster-ip-range=10.201.0.0/16 --network cttest
minikube -p cluster-b start --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.210.0.0/16 --service-cluster-ip-range=10.211.0.0/16 --network cttest
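
For reference, minikube creates the named Docker network itself if it does not already exist, but for a lab setup the shared bridge could also be created up front with an explicit subnet. A minimal sketch (the subnet and network name here are only examples, not from the report):

$ docker network create --driver bridge --subnet 192.168.60.0/24 cttest
$ docker network inspect cttest --format '{{(index .IPAM.Config 0).Subnet}}'
192.168.60.0/24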

Example usage:

$ mk start --static-ip=false
😄  minikube v1.24.0 on Darwin 12.0.1 (arm64)
✨  Automatically selected the docker driver
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.22.4 preload ...
    > preloaded-images-k8s-v14-v1...: 252.09 MiB / 252.09 MiB  100.00% 29.13 Mi
🔥  Creating docker container (CPUs=2, Memory=1988MB) ...
🐳  Preparing Kubernetes v1.22.4 on Docker 20.10.8 ...
❌  Unable to load cached images: loading cached images: stat /Users/medya/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.4: no such file or directory
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
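
A quick way to see which (non-static) IP and Docker network the node container ended up with, not part of the captured session, would be something like:

$ minikube ip
$ docker container inspect minikube --format '{{json .NetworkSettings.Networks}}'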

@k8s-ci-robot k8s-ci-robot added the size/S Denotes a PR that changes 10-29 lines, ignoring generated files. label Nov 29, 2021
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: medyagh

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 29, 2021
@medyagh
Member Author

medyagh commented Nov 29, 2021

Here are the links to binaries built from this PR that I think might fix this issue, do you mind trying them out?

https://storage.googleapis.com/minikube-builds/13050/minikube-linux-amd64
https://storage.googleapis.com/minikube-builds/13050/minikube-darwin-amd64
https://storage.googleapis.com/minikube-builds/13050/minikube-windows-amd64.exe

If you could, please try deleting first and then setting static-ip to false:

./minikube-darwin-amd64 delete --all
./minikube-darwin-amd64 start --static-ip=false --network=cttest
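
Afterwards, something like this (an illustrative check, not part of the instructions above) would show whether the node container actually joined the cttest bridge:

docker network inspect cttest --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'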

@sharifelgamal
Collaborator

/ok-to-test

@k8s-ci-robot k8s-ci-robot added the ok-to-test Indicates a non-member PR verified by an org member that is safe to test. label Nov 29, 2021
@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 13050) |
+----------------+----------+---------------------+
| minikube start | 46.6s    | 46.0s               |
| enable ingress | 30.6s    | 30.6s               |
+----------------+----------+---------------------+

Times for minikube start: 48.1s 45.7s 45.6s 45.9s 47.8s
Times for minikube (PR 13050) start: 48.1s 45.8s 46.4s 45.2s 44.8s

Times for minikube ingress: 31.7s 30.7s 30.2s 29.9s 30.3s
Times for minikube (PR 13050) ingress: 30.7s 30.8s 31.8s 30.2s 29.7s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 13050) |
+----------------+----------+---------------------+
| minikube start | 20.8s    | 21.3s               |
| enable ingress | 30.6s    | 30.3s               |
+----------------+----------+---------------------+

Times for minikube start: 19.7s 20.2s 21.5s 21.6s 21.0s
Times for minikube (PR 13050) start: 20.9s 20.5s 21.3s 21.3s 22.6s

Times for minikube (PR 13050) ingress: 27.9s 27.9s 26.9s 33.9s 34.9s
Times for minikube ingress: 25.9s 34.9s 35.4s 28.4s 28.4s

docker driver with containerd runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 13050) |
+----------------+----------+---------------------+
| minikube start | 41.0s    | 40.7s               |
| enable ingress | 28.2s    | 25.9s               |
+----------------+----------+---------------------+

Times for minikube start: 41.1s 40.2s 42.1s 40.9s 40.8s
Times for minikube (PR 13050) start: 41.2s 40.5s 40.5s 41.4s 39.7s

Times for minikube ingress: 17.9s 21.9s 36.4s 32.4s 32.4s
Times for minikube (PR 13050) ingress: 32.4s 28.4s 18.4s 32.4s 17.9s

@minikube-pr-bot

These are the flake rates of all failed tests.

Environment Failed Tests Flake Rate (%)
Docker_Linux_containerd TestMultiNode/serial/RestartKeepsNodes (gopogh) 0.00 (chart)
Docker_Linux TestMultiNode/serial/RestartKeepsNodes (gopogh) 0.00 (chart)
Docker_macOS TestMultiNode/serial/RestartKeepsNodes (gopogh) 0.00 (chart)
Hyperkit_macOS TestAddons/parallel/Registry (gopogh) 0.00 (chart)
Hyperkit_macOS TestFunctional/parallel/MountCmd/any-port (gopogh) 0.00 (chart)
Hyperkit_macOS TestAddons/serial/GCPAuth (gopogh) 0.80 (chart)
Hyperkit_macOS TestAddons/parallel/CSI (gopogh) 1.60 (chart)
Hyperkit_macOS TestFunctional/parallel/TunnelCmd/serial/AccessDirect (gopogh) 2.40 (chart)
Hyperkit_macOS TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (gopogh) 2.40 (chart)
Hyperkit_macOS TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (gopogh) 2.40 (chart)
Hyperkit_macOS TestFunctional/parallel/PersistentVolumeClaim (gopogh) 5.60 (chart)
Hyperkit_macOS TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (gopogh) 6.40 (chart)
Hyperkit_macOS TestFunctional/parallel/ImageCommands/ImageBuild (gopogh) 8.00 (chart)
Docker_macOS TestStartStop/group/default-k8s-different-port/serial/SecondStart (gopogh) 28.92 (chart)
Docker_Linux TestFunctional/serial/ComponentHealth (gopogh) 28.93 (chart)
Docker_macOS TestFunctional/serial/ComponentHealth (gopogh) 36.50 (chart)
Docker_macOS TestFunctional/serial/ExtraConfig (gopogh) 36.50 (chart)
Hyper-V_Windows TestMultiNode/serial/CopyFile (gopogh) 46.97 (chart)
Docker_macOS TestNetworkPlugins/group/kubenet/DNS (gopogh) 72.16 (chart)
Docker_macOS TestNetworkPlugins/group/kindnet/Start (gopogh) 73.64 (chart)
Docker_macOS TestNetworkPlugins/group/bridge/DNS (gopogh) 73.79 (chart)
Docker_macOS TestNetworkPlugins/group/enable-default-cni/DNS (gopogh) 75.45 (chart)
Docker_macOS TestNetworkPlugins/group/calico/Start (gopogh) 78.29 (chart)
Hyper-V_Windows TestNoKubernetes/serial/StartNoArgs (gopogh) 92.94 (chart)
Docker_macOS TestDownloadOnly/v1.16.0/preload-exists (gopogh) 100.00 (chart)
Hyper-V_Windows TestMultiNode/serial/PingHostFrom2Pods (gopogh) 100.00 (chart)
Hyper-V_Windows TestMultiNode/serial/RestartKeepsNodes (gopogh) 100.00 (chart)
Hyper-V_Windows TestRunningBinaryUpgrade (gopogh) 100.00 (chart)
Hyper-V_Windows TestStoppedBinaryUpgrade/Upgrade (gopogh) 100.00 (chart)

To see the flake rates of all tests by environment, click here.

@medyagh medyagh changed the title new flag --static-ip to allow random IP wip: new flag --static-ip to allow random IP Nov 30, 2021
@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Nov 30, 2021
@k8s-triage-robot

Unknown CLA label state. Rechecking for CLA labels.

Send feedback to sig-contributor-experience at kubernetes/community.

/check-cla

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Nov 30, 2021
@cdtomkins
Contributor

cdtomkins commented Nov 30, 2021

That seems to work perfectly! Thanks. I will now continue on to build the lab scenario and let you know if I encounter anything else:

chris @ chris-work ~/2021_11/cluster_peer_demo 
└─526─▶ kubectl --cluster=cluster-a get nodes -o wide
NAME            STATUS   ROLES                  AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cluster-a       Ready    control-plane,master   7m50s   v1.22.4   172.17.0.3    <none>        Ubuntu 20.04.2 LTS   5.11.0-40-generic   docker://20.10.8
cluster-a-m02   Ready    <none>                 5m7s    v1.22.4   172.17.0.5    <none>        Ubuntu 20.04.2 LTS   5.11.0-40-generic   docker://20.10.8
cluster-a-m03   Ready    <none>                 4m52s   v1.22.4   172.17.0.6    <none>        Ubuntu 20.04.2 LTS   5.11.0-40-generic   docker://20.10.8
cluster-a-m04   Ready    <none>                 4m37s   v1.22.4   172.17.0.7    <none>        Ubuntu 20.04.2 LTS   5.11.0-40-generic   docker://20.10.8
cluster-a-m05   Ready    <none>                 30s     v1.22.4   172.17.0.12   <none>        Ubuntu 20.04.2 LTS   5.11.0-40-generic   docker://20.10.8
chris @ chris-work ~/2021_11/cluster_peer_demo 
└─527─▶ kubectl --cluster=cluster-b get nodes -o wide
NAME            STATUS   ROLES                  AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cluster-b       Ready    control-plane,master   6m7s    v1.22.4   172.17.0.4    <none>        Ubuntu 20.04.2 LTS   5.11.0-40-generic   docker://20.10.8
cluster-b-m02   Ready    <none>                 3m35s   v1.22.4   172.17.0.8    <none>        Ubuntu 20.04.2 LTS   5.11.0-40-generic   docker://20.10.8
cluster-b-m03   Ready    <none>                 2m50s   v1.22.4   172.17.0.9    <none>        Ubuntu 20.04.2 LTS   5.11.0-40-generic   docker://20.10.8
cluster-b-m04   Ready    <none>                 2m22s   v1.22.4   172.17.0.10   <none>        Ubuntu 20.04.2 LTS   5.11.0-40-generic   docker://20.10.8
cluster-b-m05   Ready    <none>                 52s     v1.22.4   172.17.0.11   <none>        Ubuntu 20.04.2 LTS   5.11.0-40-generic   docker://20.10.8

@cdtomkins
Contributor

cdtomkins commented Nov 30, 2021

This is great but I think I have found an issue.

I created two clusters like this:

minikube -p cluster-a start --network-plugin=cni --static-ip=false --extra-config=kubeadm.pod-network-cidr=10.200.0.0/16 --service-cluster-ip-range=10.201.0.0/16 --network=sharedlayer2
minikube -p cluster-b start --network-plugin=cni --static-ip=false --extra-config=kubeadm.pod-network-cidr=10.210.0.0/16 --service-cluster-ip-range=10.211.0.0/16 --network=sharedlayer2

A "sharedlayer2" docker bridge was created as expected, but it wasn't actually used (ignore the "NotReady" state, this is unrelated):

chris @ chris-work ~/2021_11/cluster_peer_demo 
└─645─▶ kubectl --cluster=cluster-b get nodes -o wide
NAME            STATUS     ROLES                  AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cluster-b       Ready      control-plane,master   2m28s   v1.22.4   172.17.0.7    <none>        Ubuntu 20.04.2 LTS   5.11.0-40-generic   docker://20.10.8
cluster-b-m02   Ready      <none>                 2m10s   v1.22.4   172.17.0.8    <none>        Ubuntu 20.04.2 LTS   5.11.0-40-generic   docker://20.10.8
cluster-b-m03   NotReady   <none>                 116s    v1.22.4   172.17.0.9    <none>        Ubuntu 20.04.2 LTS   5.11.0-40-generic   docker://20.10.8
cluster-b-m04   NotReady   <none>                 103s    v1.22.4   172.17.0.10   <none>        Ubuntu 20.04.2 LTS   5.11.0-40-generic   docker://20.10.8
chris @ chris-work ~/2021_11/cluster_peer_demo 
└─646─▶ kubectl --cluster=cluster-a get nodes -o wide 
NAME            STATUS   ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cluster-a       Ready    control-plane,master   13m   v1.22.4   172.17.0.3    <none>        Ubuntu 20.04.2 LTS   5.11.0-40-generic   docker://20.10.8
cluster-a-m02   Ready    <none>                 13m   v1.22.4   172.17.0.4    <none>        Ubuntu 20.04.2 LTS   5.11.0-40-generic   docker://20.10.8
cluster-a-m03   Ready    <none>                 13m   v1.22.4   172.17.0.5    <none>        Ubuntu 20.04.2 LTS   5.11.0-40-generic   docker://20.10.8
cluster-a-m04   Ready    <none>                 12m   v1.22.4   172.17.0.6    <none>        Ubuntu 20.04.2 LTS   5.11.0-40-generic   docker://20.10.8
chris @ chris-work ~/2021_11/cluster_peer_demo 
└─647─▶ docker network ls
NETWORK ID     NAME           DRIVER    SCOPE
b3e6611f0f48   bridge         bridge    local
8bc049c4e960   host           host      local
477a1639b48e   none           null      local
9ecc3b9b22ea   sharedlayer2   bridge    local
chris @ chris-work ~/2021_11/cluster_peer_demo 
└─648─▶ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.11.10.1      0.0.0.0         UG    600    0        0 wlp0s20f3
10.11.10.0      0.0.0.0         255.255.255.0   U     600    0        0 wlp0s20f3
10.230.34.0     0.0.0.0         255.255.255.0   U     0      0        0 mpqemubr0
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 wlp0s20f3
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.49.0    0.0.0.0         255.255.255.0   U     0      0        0 br-9ecc3b9b22ea
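
One way to confirm which bridge each node container actually joined (an added diagnostic, not part of the original report) is to inspect the containers directly; here the output would be expected to list only the default "bridge" network rather than "sharedlayer2":

docker inspect cluster-b --format '{{json .NetworkSettings.Networks}}'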

@medyagh
Member Author

medyagh commented Dec 7, 2021

Thank you for your patience on this response.

minikube -p cluster-a start --network-plugin=cni --static-ip=false --extra-config=kubeadm.pod-network-cidr=10.200.0.0/16 --service-cluster-ip-range=10.201.0.0/16 --network=sharedlayer2

@cdtomkins I am curious, are you applying your own CNI?

❗  With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative

You could choose one of minikube's CNIs, for example: minikube start --cni=calico

And also, is there a reason you want two different clusters to use the same CIDR for pods? That could collide.

--extra-config=kubeadm.pod-network-cidr=10.200.0.0/16 --service-cluster-ip-range=10.201.0.0/16 

In your output I see multiple nodes, but your minikube start command does not have flags for creating multiple nodes...

Have you tried a fresh start (delete --all) and then having two single-node clusters share the same network?

@medyagh
Member Author

medyagh commented Dec 7, 2021

@cdtomkins

For me, with a single node, it seems to be working:


14:33:00 medya/workspace/minikube
static_ip2 ✓
$ minikube -p cluster-a start --network-plugin=cni --static-ip=false --extra-config=kubeadm.pod-network-cidr=10.200.0.0/16 --service-cluster-ip-range=10.201.0.0/16 --network=sharedlayer2
Error: unknown flag: --static-ip
See 'minikube start --help' for usage.
14:33:16 medya/workspace/minikube
static_ip2 ✓
$ mk -p cluster-a start --network-plugin=cni --static-ip=false --extra-config=kubeadm.pod-network-cidr=10.200.0.0/16 --service-cluster-ip-range=10.201.0.0/16 --network=sharedlayer2
😄  [cluster-a] minikube v1.24.0 on Darwin 12.0.1 (arm64)
✨  Automatically selected the docker driver
❗  With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
👍  Starting control plane node cluster-a in cluster cluster-a
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=1988MB) ...
🐳  Preparing Kubernetes v1.22.4 on Docker 20.10.8 ...
    ▪ kubeadm.pod-network-cidr=10.200.0.0/16
❌  Unable to load cached images: loading cached images: stat /Users/medya/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4: no such file or directory
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "cluster-a" cluster and "default" namespace by default
14:36:39 medya/workspace/minikube
static_ip2 ✓
$ kc get pods -A
NAMESPACE     NAME                                READY   STATUS              RESTARTS   AGE
kube-system   coredns-78fcd69978-mv568            0/1     ContainerCreating   0          33s
kube-system   etcd-cluster-a                      1/1     Running             0          43s
kube-system   kube-apiserver-cluster-a            1/1     Running             0          43s
kube-system   kube-controller-manager-cluster-a   1/1     Running             0          43s
kube-system   kube-proxy-ktf87                    1/1     Running             0          33s
kube-system   kube-scheduler-cluster-a            1/1     Running             0          46s
kube-system   storage-provisioner                 1/1     Running             0          40s
14:37:18 medya/workspace/minikube
static_ip2 ✓
$ docker netwrok ls
docker: 'netwrok' is not a docker command.
See 'docker --help'
14:37:32 medya/workspace/minikube
static_ip2 ✓
$ docker network ls
NETWORK ID     NAME           DRIVER    SCOPE
cfd291543106   bridge         bridge    local
a4cf62200f82   host           host      local
28c81f724c0f   none           null      local
1683a6526772   sharedlayer2   bridge    local
14:37:36 medya/workspace/minikube
static_ip2 ✓
$ minikube -p cluster-b start --network-plugin=cni --static-ip=false --extra-config=kubeadm.pod-network-cidr=10.210.0.0/16 --service-cluster-ip-range=10.211.0.0/16 --network=sharedlayer2
Error: unknown flag: --static-ip
See 'minikube start --help' for usage.
14:37:38 medya/workspace/minikube
static_ip2 ✓
$ mk -p cluster-b start --network-plugin=cni --static-ip=false --extra-config=kubeadm.pod-network-cidr=10.210.0.0/16 --service-cluster-ip-range=10.211.0.0/16 --network=sharedlayer2
😄  [cluster-b] minikube v1.24.0 on Darwin 12.0.1 (arm64)
✨  Automatically selected the docker driver
❗  With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
👍  Starting control plane node cluster-b in cluster cluster-b
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=1988MB) ...
🐳  Preparing Kubernetes v1.22.4 on Docker 20.10.8 ...
    ▪ kubeadm.pod-network-cidr=10.210.0.0/16
❌  Unable to load cached images: loading cached images: stat /Users/medya/.minikube/cache/images/k8s.gcr.io/pause_3.5: no such file or directory
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "cluster-b" cluster and "default" namespace by default
14:40:36 medya/workspace/minikube
static_ip2 ✓
$ kc get pods A
Error from server (NotFound): pods "A" not found
14:41:02 medya/workspace/minikube
static_ip2 ✓
$ kc get pods -A
NAMESPACE     NAME                                READY   STATUS              RESTARTS   AGE
kube-system   coredns-78fcd69978-dwp56            0/1     ContainerCreating   0          19s
kube-system   etcd-cluster-b                      1/1     Running             0          31s
kube-system   kube-apiserver-cluster-b            1/1     Running             0          31s
kube-system   kube-controller-manager-cluster-b   1/1     Running             0          31s
kube-system   kube-proxy-7l2ld                    1/1     Running             0          19s
kube-system   kube-scheduler-cluster-b            1/1     Running             0          31s
kube-system   storage-provisioner                 1/1     Running             0          29s
14:41:04 medya/workspace/minikube
static_ip2 ✓
$ mk profile list
|-----------|-----------|---------|------------|------|---------|---------|-------|
|  Profile  | VM Driver | Runtime |     IP     | Port | Version | Status  | Nodes |
|-----------|-----------|---------|------------|------|---------|---------|-------|
| cluster-a | docker    | docker  | 172.17.0.3 | 8443 | v1.22.4 | Running |     1 |
| cluster-b | docker    | docker  | 172.17.0.4 | 8443 | v1.22.4 | Running |     1 |
|-----------|-----------|---------|------------|------|---------|---------|-------|
14:41:11 medya/workspace/minikube
static_ip2 ✓
$ docker network ls
NETWORK ID     NAME           DRIVER    SCOPE
cfd291543106   bridge         bridge    local
a4cf62200f82   host           host      local
28c81f724c0f   none           null      local
1683a6526772   sharedlayer2   bridge    local

@cdtomkins
Contributor

@cdtomkins I am curious, are you applying your own CNI?

Yes, I work on the Project Calico team; for my use case I need to apply the CNI myself, so I don't specify the CNI at minikube start.
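
(For context, applying the CNI manually in a setup like this would look roughly like the following; the manifest URL is the standard Calico install manifest and is given here only as an illustration, it is not something discussed in this thread:)

kubectl --cluster=cluster-a apply -f https://docs.projectcalico.org/manifests/calico.yaml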

And also, is there a reason you want two different clusters to use the same CIDR for pods? That could collide.

Actually, if you look again, they are using different pod and service ranges.

In your output I see multiple nodes, but your minikube start command does not have flags for creating multiple nodes...

Yes, I added the extra nodes afterwards using minikube node add.
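
(For the record, that step would just be repeated once per extra worker, roughly:)

minikube -p cluster-a node add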

Have you tried a fresh start (delete --all) and then having two single-node clusters share the same network?

Yes, I did it again for you below; you can see that everything works fine, but the nodes have been created on the docker bridge, not on the calico_cluster_peer_demo bridge.

I hope this helps.

chris @ chris-work ~ 
└─505─▶ minikube delete --all && rm -rf ~/.minikube && rm -rf ~/.kube
🔥  Successfully deleted all profiles

chris @ chris-work ~ 
└─506─▶ minikube -p cluster-a start --network-plugin=cni --static-ip=false --extra-config=kubeadm.pod-network-cidr=10.200.0.0/16 --service-cluster-ip-range=10.201.0.0/16 --network=calico_cluster_peer_demo
😄  [cluster-a] minikube v1.24.0 on Ubuntu 20.04
    ▪ KUBECONFIG=/home/chris/.kube/config
✨  Automatically selected the docker driver. Other choices: none, ssh
❗  With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
👍  Starting control plane node cluster-a in cluster cluster-a
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.22.4 preload ...
    > preloaded-images-k8s-v14-v1...: 501.79 MiB / 501.79 MiB  100.00% 7.88 MiB
🔥  Creating docker container (CPUs=2, Memory=7900MB) ...
🐳  Preparing Kubernetes v1.22.4 on Docker 20.10.8 ...
    ▪ kubeadm.pod-network-cidr=10.200.0.0/16
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "cluster-a" cluster and "default" namespace by default

chris @ chris-work ~ 
└─507─▶ minikube -p cluster-b start --network-plugin=cni --static-ip=false --extra-config=kubeadm.pod-network-cidr=10.210.0.0/16 --service-cluster-ip-range=10.211.0.0/16 --network=calico_cluster_peer_demo
😄  [cluster-b] minikube v1.24.0 on Ubuntu 20.04
    ▪ KUBECONFIG=/home/chris/.kube/config
✨  Automatically selected the docker driver. Other choices: ssh, none
❗  With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
👍  Starting control plane node cluster-b in cluster cluster-b
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=7900MB) ...
🐳  Preparing Kubernetes v1.22.4 on Docker 20.10.8 ...
    ▪ kubeadm.pod-network-cidr=10.210.0.0/16
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "cluster-b" cluster and "default" namespace by default

chris @ chris-work ~ 
└─508─▶ kubectl get nodes -o wide
NAME        STATUS   ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cluster-b   Ready    control-plane,master   69s   v1.22.4   172.17.0.4    <none>        Ubuntu 20.04.2 LTS   5.11.0-41-generic   docker://20.10.8

chris @ chris-work ~ 
└─511─▶ kubectl --cluster=cluster-a get nodes -o wide
NAME        STATUS   ROLES                  AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cluster-a   Ready    control-plane,master   3m10s   v1.22.4   172.17.0.3    <none>        Ubuntu 20.04.2 LTS   5.11.0-41-generic   docker://20.10.8

chris @ chris-work ~ 
└─512─▶ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.11.10.1      0.0.0.0         UG    600    0        0 wlp0s20f3
10.11.10.0      0.0.0.0         255.255.255.0   U     600    0        0 wlp0s20f3
10.230.34.0     0.0.0.0         255.255.255.0   U     0      0        0 mpqemubr0
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 mpqemubr0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.49.0    0.0.0.0         255.255.255.0   U     0      0        0 br-1f2b8596d61b

chris @ chris-work ~ 
└─514─▶ docker network ls
NETWORK ID     NAME                       DRIVER    SCOPE
c58086b95b57   bridge                     bridge    local
1f2b8596d61b   calico_cluster_peer_demo   bridge    local
8bc049c4e960   host                       host      local
477a1639b48e   none                       null      local

@k8s-ci-robot
Contributor

@medyagh: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Dec 11, 2021
@cdtomkins
Contributor

Any further thoughts on merging this one? I will mention it in my FOSDEM talk early next month.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 13, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 13, 2022
@sharifelgamal sharifelgamal removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label May 18, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 16, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 15, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@medyagh
Member Author

medyagh commented Jan 11, 2023

For anyone who followed this PR: there is another PR that replaced it

#15553

@cdtomkins that will be included in the next minikube release and is available at minikube HEAD
