minikube --cni does not delete the origin cni config file #14536
Comments
This is a known issue: the podman CNI files conflict with the Kubernetes files. They are supposed to be left unused by setting a different CNI configuration, but it would be better to change the podman configuration to use another directory for the podman CNI, especially with 1.24, where CNI config works "differently".
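One possible shape for that change, sketched below under stated assumptions: podman's containers.conf has a network_config_dir setting in its [network] section, so the podman bridge config could live outside /etc/cni/net.d. The paths and manual steps here are illustrative only, not what minikube currently ships.

# Sketch: move podman's CNI config out of the directory the kubelet/runtime scans.
# Assumes containers.conf honors [network] network_config_dir on this podman version.
minikube ssh                      # open a shell inside the minikube VM
sudo mkdir -p /etc/containers/cni/net.d
sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/containers/cni/net.d/
# then point podman at the new directory, e.g. in /etc/containers/containers.conf:
#   [network]
#   network_config_dir = "/etc/containers/cni/net.d"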
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned", in response to the /close not-planned above.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What Happened?
minikube start --driver kvm2 --registry-mirror=https://registry.docker-cn.com --image-mirror-country cn --extra-config=kubelet.cgroup-driver=systemd --kubernetes-version=v1.18.8 -n 1 --cni=kubefay.yaml --network-plugin='cni' -v 5
When the above command is used to start a cluster with a specific CNI YAML file, the cluster comes up with the default CNI bridge instead. The CNI config at /etc/cni/net.d/87-podman-bridge.conflist is not deleted. A CNI developer would expect a cluster without any other CNI config files when the --cni flag is used.
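As a manual workaround (a sketch only, not an official fix), the leftover file can be inspected and removed on the running node before reapplying the CNI manifest; the path comes from this report, everything else is an assumption:

# See what the node actually has in its CNI config directory
minikube ssh "ls -l /etc/cni/net.d/"
# Remove the leftover podman bridge config so only the --cni config remains
minikube ssh "sudo rm /etc/cni/net.d/87-podman-bridge.conflist"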
I found that 87-podman-bridge.conflist is created at the minikube ISO build stage; the makefile deploy/iso/minikube-iso/package/podman/podman.mk simply creates the conf file. Is it necessary to create a CNI config while setting podman up?
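For context, a podman bridge conflist of this kind generally looks roughly like the sketch below (the version, names, and subnet are assumptions for illustration, not a dump of the file the ISO ships). Because runtimes load the valid configs they find in /etc/cni/net.d, typically in lexical order, a file like this can shadow or conflict with the configuration requested via --cni:

{
  "cniVersion": "0.4.0",
  "name": "podman",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni-podman0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}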
Attach the log file
log.txt
Operating System
Ubuntu
Driver
KVM2