wip: new flag --static-ip to allow random IP #13050
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: medyagh. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Here is the link to the binary from a PR that I think might fix this issue. Do you mind trying it out?
If you could, please try a delete first and then set static-ip to false.
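A minimal sketch of that test run, assuming the PR build was downloaded as ./minikube (hypothetical filename) and that the proposed flag is a boolean named --static-ip, as the PR title suggests:

```
# wipe any existing clusters first
./minikube delete --all

# start again with the PR build, opting out of the static IP so Docker
# assigns one (flag name taken from this PR; exact semantics may differ)
./minikube start --driver=docker --static-ip=false
```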
/ok-to-test
kvm2 driver with docker runtime
Times for minikube start: 48.1s 45.7s 45.6s 45.9s 47.8s
Times for minikube ingress: 31.7s 30.7s 30.2s 29.9s 30.3s

docker driver with docker runtime
Times for minikube start: 19.7s 20.2s 21.5s 21.6s 21.0s
Times for minikube (PR 13050) ingress: 27.9s 27.9s 26.9s 33.9s 34.9s

docker driver with containerd runtime
Times for minikube start: 41.1s 40.2s 42.1s 40.9s 40.8s
Times for minikube ingress: 17.9s 21.9s 36.4s 32.4s 32.4s
These are the flake rates of all failed tests.
To see the flake rates of all tests by environment, click here.
Unknown CLA label state. Rechecking for CLA labels. Send feedback to sig-contributor-experience at kubernetes/community. /check-cla
That seems to work perfectly! Thanks. I will now continue building the lab scenario and let you know if I encounter anything else.
This is great, but I think I have found an issue. I created two clusters. A "sharedlayer2" Docker bridge was created as expected, but it wasn't actually used (ignore the "NotReady" state; this is unrelated).
Thank you for your patience on this response.
@cdtomkins, I am curious: are you applying your own CNI? You could choose one of minikube's CNIs, for example. Also, is there a reason you want two different clusters to use the same CIDR for pods? Couldn't that collide?
In your output I see multiple nodes, but your minikube start command does not have flags for creating multiple nodes. Have you tried a fresh start (delete --all) and then having two single-node clusters share the same network, as sketched below?
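A sketch of that suggested test, using hypothetical profile names and the sharedlayer2 network name mentioned earlier:

```
# start completely fresh
minikube delete --all

# then create two single-node clusters that share one user-defined
# Docker network via the existing --network flag
minikube start -p one --driver=docker --network=sharedlayer2
minikube start -p two --driver=docker --network=sharedlayer2
```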
For me, with a single node, it seems to be working.
Yes, I work for the Project Calico team; for my use case I need to apply the CNI myself, so I don't specify the CNI.
Actually, if you look again, they are using different pod and service ranges.
Yes, I added the extra nodes afterwards.
Yes, I did it again for you below. You can see that everything works fine, but the nodes have been created on the docker bridge, not on the calico_cluster_peer_demo bridge. I hope this helps.
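One way to check which bridge the node containers actually landed on (the container name below is hypothetical; minikube node containers are normally named after the profile):

```
# list the containers attached to each Docker network
docker network inspect calico_cluster_peer_demo
docker network inspect bridge

# or inspect a single node container directly
docker inspect cluster1 --format '{{json .NetworkSettings.Networks}}'
```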
@medyagh: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Any further thoughts on merging this one? I will mention it in my FOSDEM talk early next month.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules: after 90d of inactivity, lifecycle/stale is applied; after 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied; after 30d of inactivity since lifecycle/rotten was applied, the issue is closed.
You can: mark this issue or PR as fresh with /remove-lifecycle stale, mark it as rotten with /lifecycle rotten, close it with /close, or offer to help out with Issue Triage.
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules: after 90d of inactivity, lifecycle/stale is applied; after 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied; after 30d of inactivity since lifecycle/rotten was applied, the issue is closed.
You can: mark this issue or PR as fresh with /remove-lifecycle rotten, close it with /close, or offer to help out with Issue Triage.
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages PRs according to the following rules: after 90d of inactivity, lifecycle/stale is applied; after 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied; after 30d of inactivity since lifecycle/rotten was applied, the PR is closed.
You can: reopen this PR with /reopen, mark it as fresh with /remove-lifecycle rotten, or offer to help out with Issue Triage.
Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closed this PR in response to the /close above.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
For anyone who followed this PR: there is another PR that replaced this one, @cdtomkins. It will be included in the next minikube release and is available on minikube HEAD.
Trying to help a user on Slack who wanted to have two minikube clusters on the same Docker network.
Example usage:
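A plausible sketch, assuming a boolean --static-ip flag (per the PR title) and placeholder profile and network names:

```
# two clusters on one shared Docker network; with --static-ip=false each
# node container gets a Docker-assigned (random) IP instead of a fixed one
minikube start -p cluster-a --driver=docker --network=shared-net --static-ip=false
minikube start -p cluster-b --driver=docker --network=shared-net --static-ip=false
```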