Support starting minikube with the Podman driver on NixOS systems #12739
Conversation
Welcome @alias-dev! |
Hi @alias-dev. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Can one of the admins verify this patch? |
/ok-to-test |
Very interesting PR! Thank you @alias-dev. Do you mind sharing an example of the NixOS system error before and after this PR, since I cannot verify this myself? |
kvm2 driver with docker runtime
Times for minikube start: 49.1s 45.9s 45.5s 46.4s 46.8s
Times for minikube (PR 12739) ingress: 31.7s 32.2s 30.7s 30.8s 31.3s

docker driver with docker runtime
Times for minikube ingress: 25.9s 26.9s 26.9s 26.4s 27.4s
Times for minikube (PR 12739) start: 20.6s 21.5s 21.6s 20.7s 21.8s

docker driver with containerd runtime
Times for minikube (PR 12739) start: 29.0s 43.4s 42.9s 44.2s 43.6s
Times for minikube (PR 12739) ingress: 30.4s 32.9s 26.9s 32.9s 61.9s |
These are the flake rates of all failed tests.
To see the flake rates of all tests by environment, click here. |
Thanks @medyagh. Without this change we get a
And with it, the
The error with the storage provisioner is due to the manifest being generated without an image tag, so I don't think that's related to this change:

# /etc/kubernetes/addons/storage-provisioner.yaml
...
---
apiVersion: v1
kind: Pod
metadata:
  name: storage-provisioner
  namespace: kube-system
  labels:
    integration-test: storage-provisioner
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceAccountName: storage-provisioner
  hostNetwork: true
  containers:
    - name: storage-provisioner
      image: gcr.io/k8s-minikube/storage-provisioner:
      command: ["/storage-provisioner"]
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - mountPath: /tmp
          name: tmp
  volumes:
    - name: tmp
      hostPath:
        path: /tmp
        type: Directory |
Does the provisioner error happen even if you do a |
Please also fix the lint issues in https://github.com/kubernetes/minikube/pull/12739/checks?check_run_id=3945413490 |
BTW, this is the line in our code that generates the storage provisioner: deploy/addons/storage-provisioner/storage-provisioner.yaml.tmpl
Maybe we should fix it in a way that if there is no tag, it should not have the ":" |
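For what it's worth, a minimal sketch of the kind of template guard that could avoid the dangling ":". The imageRef fields here are hypothetical stand-ins, not the actual variables used by storage-provisioner.yaml.tmpl:

```go
package main

import (
	"os"
	"text/template"
)

// imageRef is a hypothetical stand-in for whatever data the addon template
// actually receives; the real minikube template variables may differ.
type imageRef struct {
	Repo string
	Tag  string
}

// The {{if .Tag}} guard is the point: when Tag is empty, neither the ":"
// nor the tag is rendered.
const manifestSnippet = "    image: {{.Repo}}{{if .Tag}}:{{.Tag}}{{end}}\n"

func main() {
	tmpl := template.Must(template.New("storage-provisioner").Parse(manifestSnippet))

	// With a tag: "image: gcr.io/k8s-minikube/storage-provisioner:v5"
	_ = tmpl.Execute(os.Stdout, imageRef{Repo: "gcr.io/k8s-minikube/storage-provisioner", Tag: "v5"})

	// Without a tag: "image: gcr.io/k8s-minikube/storage-provisioner" (no dangling ":")
	_ = tmpl.Execute(os.Stdout, imageRef{Repo: "gcr.io/k8s-minikube/storage-provisioner"})
}
```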
@medyagh My mistake! I'd built it with the default version from minikube/pkg/version/version.go, line 37 at 038effb.
Building with the make target, it now runs OK:
|
@medyagh Regarding the lint error, it seems I've just nudged that function over the complexity threshold. I had considered a simpler implementation of this using a |
Yes, let's fix the lint! Or we could move the implementation details into a helper func. |
kvm2 driver with docker runtime
Times for minikube start: 50.8s 47.4s 48.3s 47.1s 49.3s
Times for minikube ingress: 32.4s 30.9s 31.3s 31.4s 30.8s

docker driver with docker runtime
Times for minikube start: 22.8s 22.1s 22.2s 22.2s 22.6s
Times for minikube ingress: 26.0s 27.0s 26.5s 34.0s 27.5s

docker driver with containerd runtime
Times for minikube start: 25.1s 41.7s 41.1s 42.4s 42.2s
Times for minikube (PR 12739) ingress: 61.5s 32.5s 28.5s 32.4s 28.9s |
These are the flake rates of all failed tests.
Too many tests failed - See test logs for more details. To see the flake rates of all tests by environment, click here. |
kvm2 driver with docker runtime
Times for minikube start: 50.8s 47.1s 49.1s 48.4s 46.8s
Times for minikube (PR 12739) ingress: 31.9s 32.3s 31.9s 32.3s 32.4s

docker driver with docker runtime
Times for minikube (PR 12739) start: 22.4s 22.7s 21.9s 22.2s 23.8s
Times for minikube ingress: 29.0s 27.5s 27.5s 35.5s 27.5s

docker driver with containerd runtime
Times for minikube start: 25.4s 25.8s 42.0s 36.9s 41.9s
Times for minikube ingress: 32.9s 34.4s 32.5s 28.8s 32.9s |
@medyagh Sorry it took me a while to come back to this. Is this looking OK now? |
kvm2 driver with docker runtime
Times for minikube start: 49.7s 47.2s 47.2s 46.7s 47.9s
Times for minikube ingress: 31.8s 31.3s 31.3s 31.3s 31.8s

docker driver with docker runtime
Times for minikube ingress: 31.4s 26.9s 35.4s 27.4s 34.0s
Times for minikube (PR 12739) start: 20.8s 21.6s 21.4s 22.1s 21.9s

docker driver with containerd runtime
Times for minikube start: 29.5s 41.3s 42.3s 41.4s 29.7s
Times for minikube ingress: 32.4s 32.9s 18.9s 32.9s 28.0s |
These are the flake rates of all failed tests.
Too many tests failed - See test logs for more details. To see the flake rates of all tests by environment, click here. |
I ran into the same problem today with minikube v1.24.0 on NixOS 21.11. Is there anything missing from this PR? Maybe I can help |
kvm2 driver with docker runtime
Times for minikube start: 46.0s 44.3s 44.7s 43.6s 44.0s
Times for minikube ingress: 26.1s 26.5s 28.6s 28.6s 28.6s

docker driver with docker runtime
Times for minikube start: 29.7s 26.3s 26.3s 25.8s 26.4s
Times for minikube ingress: 22.9s 22.0s 21.9s 23.4s 21.9s

docker driver with containerd runtime
Times for minikube ingress: 19.9s 22.9s 18.9s 22.9s 19.4s
Times for minikube start: 34.2s 41.4s 45.2s 41.4s 45.6s |
These are the flake rates of all failed tests.
Too many tests failed - See test logs for more details. To see the flake rates of all tests by environment, click here. |
Hi @medyagh, do you think this PR is ready to merge? |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale |
kvm2 driver with docker runtime
Times for minikube ingress: 29.1s 29.1s 26.1s 26.5s 30.1s
Times for minikube (PR 12739) start: 50.5s 51.6s 50.0s 51.5s 50.2s

docker driver with docker runtime
Times for minikube start: 28.7s 23.9s 24.6s 23.8s 24.8s
Times for minikube ingress: 22.9s 22.4s 23.0s 22.9s 23.0s

docker driver with containerd runtime
Times for minikube start: 34.9s 28.9s 28.4s 29.1s 32.3s
Times for minikube ingress: 18.9s 32.4s 22.4s 21.9s 18.9s |
These are the flake rates of all failed tests.
Too many tests failed - See test logs for more details. To see the flake rates of all tests by environment, click here. |
kvm2 driver with docker runtime
Times for minikube start: 52.2s 50.6s 50.4s 51.3s 49.1s
Times for minikube ingress: 29.1s 29.6s 30.1s 25.1s 30.6s

docker driver with docker runtime
Times for minikube start: 28.4s 23.9s 24.3s 24.8s 25.2s
Times for minikube (PR 12739) ingress: 21.4s 21.9s 23.9s 22.4s 22.9s

docker driver with containerd runtime
Times for minikube ingress: 22.4s 17.9s 31.9s 31.9s 22.4s
Times for minikube start: 30.1s 34.0s 28.6s 29.4s 28.7s |
These are the flake rates of all failed tests.
Too many tests failed - See test logs for more details. To see the flake rates of all tests by environment, click here. |
Thanks for the PR @alias-dev!
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: afbjorklund, alias-dev, spowelljr

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. |
Checks both the standard kernel modules path at /lib/modules, and the /run/current-system/kernel-modules/lib/modules path present on NixOS systems, before mounting.

Fixes #12738
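For illustration, a rough sketch of the kind of fallback check described above. kernelModulesPath is a hypothetical helper name and the --volume string is only an assumption about how the result might feed the driver's bind mount, not the PR's actual code:

```go
package main

import (
	"fmt"
	"os"
)

// kernelModulesPath is a hypothetical helper (not the actual minikube code)
// that returns the first kernel modules directory that exists on the host,
// preferring the standard location and falling back to the NixOS one.
func kernelModulesPath() (string, error) {
	candidates := []string{
		"/lib/modules", // standard Linux layout
		"/run/current-system/kernel-modules/lib/modules", // NixOS layout
	}
	for _, dir := range candidates {
		if info, err := os.Stat(dir); err == nil && info.IsDir() {
			return dir, nil
		}
	}
	return "", fmt.Errorf("no kernel modules directory found in %v", candidates)
}

func main() {
	src, err := kernelModulesPath()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The detected directory could then be bind-mounted into the node
	// container, e.g. "<src>:/lib/modules:ro" for the podman/docker drivers.
	fmt.Printf("--volume=%s:/lib/modules:ro\n", src)
}
```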