
[Question] How to change the default IP address of the minikube docker driver cluster? #12315

Closed
charleech opened this issue Aug 20, 2021 · 9 comments · Fixed by #13730
Assignees
Labels
  • area/networking: networking issues
  • co/kic-base
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • kind/support: Categorizes issue or PR as a support question.
  • lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.
  • priority/backlog: Higher priority than priority/awaiting-more-evidence.

Comments

charleech commented Aug 20, 2021

Per the minikube documentation on Proxies and VPNs, 192.168.49.0/24 is used by the minikube docker driver's first cluster. At the moment that subnet conflicts with my environment's network, and the nodes cannot communicate with each other.

I also had the chance to create a new minikube profile, which automatically creates a new docker network, e.g. 192.168.58.0/24. This new profile works like a charm: each node is able to communicate with the others.

I verified this by following Using Multi-Node Clusters to test each profile (refer to my testing in the previously closed issue #11669):

  • The first profile, with 192.168.49.0/24, fails.
  • The second profile, with 192.168.58.0/24, works as expected.

I'm not sure whether there is any way to change this subnet when starting minikube. Could you please advise?

Steps to reproduce the issue:

  1. Start minikube:
minikube start --driver=docker \
  --nodes 2 \
  --docker-opt bip=172.18.0.1/16 

# Note: The default 172.17.0.1/16 also conflicts with
# my environment's network.
😄  minikube v1.22.0 on Centos 7.7.1908
    ▪ MINIKUBE_IN_STYLE=true
    ▪ MINIKUBE_HOME=/opt/minikube.home
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=4000MB) ...
🐳  Preparing Kubernetes v1.21.2 on Docker 20.10.7 ...
    ▪ opt bip=172.18.0.1/16
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass

👍  Starting node minikube-m02 in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=4000MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.49.2
🐳  Preparing Kubernetes v1.21.2 on Docker 20.10.7 ...
    ▪ opt bip=172.18.0.1/16
    ▪ env NO_PROXY=192.168.49.2
🔎  Verifying Kubernetes components...
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
  2. Inspect the Docker network:
docker network inspect minikube
"IPAM": {
    "Driver": "default",
    "Options": {},
    "Config": [
        {
            "Subnet": "192.168.49.0/24",
            "Gateway": "192.168.49.1"
        }
    ]
},
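One note (my observation, not stated in the thread): `--docker-opt bip=...` configures the docker0 bridge of the Docker daemon running *inside* the minikube node, while the 192.168.49.0/24 subnet shown above belongs to the host-side `minikube` docker network that the nodes attach to, so `bip` cannot change it. Assuming a running cluster, the two subnets can be compared like this:

```shell
# Subnet of the docker0 bridge inside the node (this is what bip sets):
minikube ssh -- docker network inspect bridge \
  --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'

# Subnet of the host-side "minikube" network the nodes attach to:
docker network inspect minikube \
  --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
```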

Full output of minikube logs command:
N/A

Full output of failed command:
N/A

RA489 commented Aug 25, 2021

/kind support

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label Aug 25, 2021
sharifelgamal (Collaborator) commented:

So when we create the minikube guest using the docker driver, we explicitly create the corresponding docker network starting with a hardcoded subnet of 192.168.49.0 and incrementing until we find a free one. There isn't currently a way to pick that subnet, but I wouldn't be opposed to supporting that.
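The allocation behavior described above can be sketched as follows. This is a hypothetical illustration, not minikube's actual code; the step of 9 is an assumption inferred from the subnets observed in this thread (192.168.49.0/24 for the first cluster, 192.168.58.0/24 for the second).

```shell
#!/usr/bin/env sh
# Sketch of the described behavior: start from the hardcoded base subnet
# and step to the next candidate until a free one is found.
BASE_OCTET=49   # hardcoded starting point: 192.168.49.0/24
STEP=9          # assumption: gap observed between successive profiles

subnet_for_attempt() {
  # attempt 0 -> 192.168.49.0/24, attempt 1 -> 192.168.58.0/24, ...
  echo "192.168.$((BASE_OCTET + STEP * $1)).0/24"
}

# Checking whether a candidate is free would query docker, e.g.:
#   docker network ls -q | xargs docker network inspect \
#     --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
```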

@sharifelgamal sharifelgamal added area/networking networking issues co/kic-base kind/feature Categorizes issue or PR as related to a new feature. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. priority/backlog Higher priority than priority/awaiting-more-evidence. and removed kind/support Categorizes issue or PR as a support question. labels Oct 6, 2021
k8s-triage-robot commented:

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 4, 2022
TekTimmy commented Jan 5, 2022

As it happens, the IPs conflict with IPs from our company network, so we really need to make this configurable.

/remove-lifecycle stale

davewongillies commented:

> So when we create the minikube guest using the docker driver, we explicitly create the corresponding docker network starting with a hardcoded subnet of 192.168.49.0 and incrementing until we find a free one. There isn't currently a way to pick that subnet, but I wouldn't be opposed to supporting that.

I would dearly love to have the ability to choose a different subnet. With the security policy at my work, when connecting to the corporate VPN, we're given a small subnet for things like k8s and docker, and without the ability to specify that particular subnet, I can't use minikube with the docker driver at all in its current state.

rstaylor commented:

> So when we create the minikube guest using the docker driver, we explicitly create the corresponding docker network starting with a hardcoded subnet of 192.168.49.0 and incrementing until we find a free one. There isn't currently a way to pick that subnet, but I wouldn't be opposed to supporting that.

> I would dearly love to have the ability to choose a different subnet. With the security policy at my work, when connecting to the corporate VPN, we're given a small subnet for things like k8s and docker, and without the ability to specify that particular subnet, I can't use minikube with the docker driver at all in its current state.

I have a similar issue with both my local and corporate vpn networks.

presztak (Member) commented:

I've started working on this.

/assign

zhan9san (Contributor) commented:

Hi @presztak @rstaylor @davewongillies @TekTimmy

Since minikube runs inside docker, how about creating a network directly on the docker side and then running minikube on that network?

  1. Create a docker bridge network:
$ docker network create --driver=bridge --subnet=192.168.60.0/24 --gateway=192.168.60.1 minikube
2aff06a3652a0b75cee570f8d2985a6fac4ded27505947ff8394689cc3a6c782
  2. Start minikube on the network created in step 1:
$ minikube start --driver=docker --nodes=2 --network minikube
😄  minikube v1.25.2 on Ubuntu 20.04
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🐳  Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
    ▪ kubelet.housekeeping-interval=5m
    ▪ kubelet.cni-conf-dir=/etc/cni/net.mk
❌  Unable to load cached images: loading cached images: stat /home/x/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.3: no such file or directory
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass

👍  Starting worker node minikube-m02 in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.60.2
🐳  Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
    ▪ env NO_PROXY=192.168.60.2
🔎  Verifying Kubernetes components...
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
  3. Verify the minikube IP:
$ minikube ip
192.168.60.2
$ kubectl get node -o wide
NAME           STATUS   ROLES                  AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
minikube       Ready    control-plane,master   4m33s   v1.23.3   192.168.60.2   <none>        Ubuntu 20.04.2 LTS   5.4.0-90-generic   docker://20.10.12
minikube-m02   Ready    <none>                 3m43s   v1.23.3   192.168.60.3   <none>        Ubuntu 20.04.2 LTS   5.4.0-90-generic   docker://20.10.12
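
The workaround above can be wrapped into a small reusable script. This is a sketch under my own assumptions: the function names (`gateway_for_subnet`, `create_cluster_network`) are hypothetical helpers, and the gateway is assumed to be the `.1` address of the subnet, matching the `--gateway=192.168.60.1` used in the demo.

```shell
#!/usr/bin/env sh
# Derive the .1 gateway from a /24-style subnet, e.g.
# 192.168.60.0/24 -> 192.168.60.1.
gateway_for_subnet() {
  net="${1%/*}"        # strip the prefix length: 192.168.60.0
  echo "${net%.*}.1"   # replace the last octet with 1
}

# Create the docker network for the cluster unless it already exists,
# then it can be passed to `minikube start --network`.
create_cluster_network() {
  name="$1"; subnet="$2"
  docker network inspect "$name" >/dev/null 2>&1 && return 0
  docker network create --driver=bridge \
    --subnet="$subnet" \
    --gateway="$(gateway_for_subnet "$subnet")" \
    "$name"
}

# Usage (assumes docker and minikube on PATH):
#   create_cluster_network minikube 192.168.60.0/24
#   minikube start --driver=docker --nodes=2 --network minikube
```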

BTW, if you hit the issue described in #13729, PR #13766 may fix it.
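
For completeness: the issue header says this was fixed by #13730. Assuming a minikube release that includes that change, my understanding is that the subnet can then be chosen natively with a `--subnet` flag, making the manual docker network unnecessary; treat the exact flag name as an assumption and check `minikube start --help` on your version.

```shell
# Assumption: a minikube release containing the fix from #13730.
minikube start --driver=docker --nodes=2 --subnet=192.168.60.0/24
```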

Forchapeatl commented:

/kind support

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label May 8, 2024