remove podman cni config when custom cni is used #10384
Conversation
Hi @michaelhenkel. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Can one of the admins verify this patch?
Unknown CLA label state. Rechecking for CLA labels. Send feedback to sig-contributor-experience at kubernetes/community.
There is a chicken-and-egg problem with calico and containerd when this PR is merged. containerd requires a valid CNI config to start up, and calico uses an init container to install its CNI config file and CNI executable. The only way I could work around this was to start with the calico CNI, wait until it's fully up using "kubectl wait", delete the non-calico config file from /etc/cni/net.d, and restart kube-proxy and coredns. Not sure if other components need a restart, as they use host IPs and are static pods anyway.
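A rough shell sketch of that workaround (the manifest URL, label selectors, and the podman config filename are my assumptions, not taken from this thread; adjust to your calico version):

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
# wait until calico is fully up before touching the CNI config directory
kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=300s
# delete the non-calico config file so only calico's CNI config remains
sudo rm -f /etc/cni/net.d/87-podman-bridge.conflist
# restart kube-proxy and coredns so they get re-wired through calico
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
kubectl -n kube-system delete pod -l k8s-app=kube-dns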
hm, in my case I also use an init container to copy the CNI binary and config to the host and containerd seems to be happy with it. I will test with Calico next week (PTO this week).
Why is it trying to use the wrong CNI, starting with podman instead of the desired one? It would be better if it just failed for a while, until the correct network is available.
(force-pushed from 1b6b77e to 8ceb92f)
as stated here (step no. 8):
also, here are the steps used to initialise the control plane - maybe the issue could be in this part specifically:
from the kubelet's point of view, the CNI is "installed" as soon as there is a CNI config in /etc/cni/net.d. Now if you specify the --cni parameter in minikube, initially the only CNI config file in that directory is the podman CNI config. The third-party CNI config file will eventually land there, but in between the coredns pod comes up, the kubelet calls the podman CNI, and the coredns pod gets an IP address from podman rather than the third-party CNI.
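To illustrate the race described above, a hypothetical look at the directory at the two points in time (the filenames are assumptions based on podman's and calico's defaults, not taken from this PR):

ls /etc/cni/net.d
# 87-podman-bridge.conflist   <- present from the start, so the kubelet
#                                considers CNI "installed" and wires up pods
# ... coredns starts here and gets an IP from the podman bridge ...
ls /etc/cni/net.d
# 10-calico.conflist          <- the third-party config lands too late
# 87-podman-bridge.conflist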
any update?
any update? |
Thank you for doing this! The actual change to CNI looks good. I've added some minor quibbles, but overall I think this can be merged quickly.
One weird thing has happened to this PR though: it seems to include all sorts of unrelated commits from master.
Typically these show up as a single merge-to-head commit, but in this PR, they are showing up as individual commits. I'm not sure if it's easier for you to remove the unrelated commits or start a new PR, but here is the workflow I use for merging against head after I've sent a PR in:
# note: the "and" chaining below is fish shell syntax; in bash, chain with &&
git remote add upstream https://github.com/kubernetes/minikube
git fetch upstream; \
  and git checkout master; \
  and git merge upstream/master; \
  and git checkout $branch; \
  and git merge master
I do this often enough that I've made it into a shell function.
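For reference, a bash sketch of such a function (the snippet above uses fish's "and" chaining; the function name here is made up):

sync_branch() {
    # merge upstream/master into the current feature branch
    local branch
    branch=$(git rev-parse --abbrev-ref HEAD) &&
    git fetch upstream &&
    git checkout master &&
    git merge upstream/master &&
    git checkout "$branch" &&
    git merge master
}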
Thank you again for contributing this improvement!
(force-pushed from a3eb86d to fa825b3)
@tstromberg thanks for the thorough review! I think I addressed all comments. Can you please take a look?
/ok-to-test
kvm2 Driver
(force-pushed from d44b06f to ba244f3)
(force-pushed from ba244f3 to d03e496)
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: michaelhenkel. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
kvm2 driver with docker runtime
Times for minikube start: 57.4s 56.2s 51.7s 57.2s 54.1s
Times for minikube ingress: 41.9s 36.9s 36.9s 44.4s 35.4s

docker driver with docker runtime
Times for minikube start: 26.7s 22.2s 22.2s 22.5s 23.1s
Times for minikube ingress: 32.5s 34.6s 34.1s 37.0s 33.6s

docker driver with containerd runtime
@michaelhenkel: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
this might not be needed anymore after @prezha's PRs
When a custom CNI is defined via the --cni flag, it might take a while for the CNI's configuration to be populated into /etc/cni/net.d.
Minikube comes up and, in the case of the crio or containerd runtime, already has a CNI configuration in /etc/cni/net.d.
Now if pods using the pod network (such as coredns) come up BEFORE the custom CNI's config file is populated, those pods' IP addresses are served by podman. Subsequent pods will be served by the custom CNI but won't be able to communicate with the pods using the podman CNI.
This PR removes all CNI configs from /etc/cni/net.d just before the custom CNI manifest is applied.
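In effect, the ordering becomes something like the following (a sketch of the behavior, not the literal code from the diff; the manifest filename is a placeholder):

# remove every pre-existing config (e.g. podman's) so the kubelet does not
# consider CNI "installed" until the custom CNI writes its own config
sudo rm -f /etc/cni/net.d/*
# only then apply the custom CNI manifest passed via --cni
kubectl apply -f custom-cni.yaml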